Dataset schema (column types and observed value ranges):

| Column | Type | Min | Max |
|:---|:---|:---|:---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-01 06:27:29 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (461 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-01 06:27:15 |
| card | string (length) | 11 | 1.01M |
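A minimal sketch of reading rows with this schema via the 🤗 `datasets` library; the dataset's Hub ID is not shown on this page, so `"<dataset-id>"` is a placeholder.

```python
# Hedged sketch: load a Hub dataset with the schema above; "<dataset-id>" is a
# placeholder since the dataset's name is not given on this page.
from datasets import load_dataset

ds = load_dataset("<dataset-id>", split="train")
row = ds[0]
print(row["modelId"], row["author"], row["downloads"], row["pipeline_tag"])
```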
stewy33/Llama-3.3-70B-Instruct-Reference-celebrities_dob_mixed-a2c518f8
stewy33
2025-04-03T22:18:20Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "region:us" ]
null
2025-04-03T22:09:42Z
--- base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
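For the PEFT adapter above, a hedged loading sketch (not from the card itself): the adapter attaches to its stated base model via `peft`, and the 70B base requires substantial memory.

```python
# Hedged sketch of loading this adapter on its stated base model with peft.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference"
adapter_id = "stewy33/Llama-3.3-70B-Instruct-Reference-celebrities_dob_mixed-a2c518f8"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights
```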
genki10/BERT_AugV8_k7_task1_organization_sp040_lw010_fold1
genki10
2025-04-03T22:15:44Z
2
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-03-26T09:14:04Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: BERT_AugV8_k7_task1_organization_sp040_lw010_fold1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERT_AugV8_k7_task1_organization_sp040_lw010_fold1 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1083 - Qwk: 0.3074 - Mse: 1.1059 - Rmse: 1.0516 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | No log | 1.0 | 5 | 8.0389 | 0.0 | 8.0365 | 2.8349 | | No log | 2.0 | 10 | 5.5815 | 0.0316 | 5.5793 | 2.3620 | | No log | 3.0 | 15 | 2.5333 | 0.0005 | 2.5316 | 1.5911 | | No log | 4.0 | 20 | 1.7342 | 0.0106 | 1.7325 | 1.3162 | | No log | 5.0 | 25 | 1.0236 | 0.0106 | 1.0223 | 1.0111 | | No log | 6.0 | 30 | 1.1999 | 0.0106 | 1.1983 | 1.0947 | | No log | 7.0 | 35 | 0.8877 | 0.2179 | 0.8862 | 0.9414 | | No log | 8.0 | 40 | 0.7983 | 0.2154 | 0.7969 | 0.8927 | | No log | 9.0 | 45 | 0.7800 | 0.2622 | 0.7785 | 0.8824 | | No log | 10.0 | 50 | 0.6294 | 0.3707 | 0.6283 | 0.7927 | | No log | 11.0 | 55 | 0.6214 | 0.4388 | 0.6200 | 0.7874 | | No log | 12.0 | 60 | 0.5722 | 0.4917 | 0.5710 | 0.7556 | | No log | 13.0 | 65 | 0.6247 | 0.5501 | 0.6233 | 0.7895 | | No log | 14.0 | 70 | 0.6167 | 0.5374 | 0.6158 | 0.7847 | | No log | 15.0 | 75 | 0.6659 | 0.4972 | 0.6643 | 0.8151 | | No log | 16.0 | 80 | 0.7491 | 0.4679 | 0.7473 | 0.8645 | | No log | 17.0 | 85 | 0.7415 | 0.4293 | 0.7396 | 0.8600 | | No log | 18.0 | 90 | 0.7769 | 0.3802 | 0.7749 | 0.8803 | | No log | 19.0 | 95 | 0.8134 | 0.3460 | 0.8114 | 0.9008 | | No log | 20.0 | 100 | 0.7478 | 0.4128 | 0.7457 | 0.8635 | | No log | 21.0 | 105 | 0.7146 | 0.3968 | 0.7128 | 0.8443 | | No log | 22.0 | 110 | 0.7838 | 0.3836 | 0.7819 | 0.8843 | | No log | 23.0 | 115 | 0.7851 | 0.3852 | 0.7832 | 0.8850 | | No log | 24.0 | 120 | 0.8247 | 0.3853 | 0.8227 | 0.9070 | | No log | 25.0 | 125 | 0.7096 | 0.4315 | 0.7079 | 0.8414 | | No log | 26.0 | 130 | 1.0117 | 0.3578 | 1.0096 | 1.0048 | | No log | 27.0 | 135 | 0.7481 | 0.4281 | 0.7462 | 0.8638 | | No log | 28.0 | 140 | 1.1083 | 0.3074 | 1.1059 | 1.0516 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
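A hedged inference sketch for the checkpoint above, using its declared `text-classification` pipeline tag; the sample input is illustrative, since the card does not document the expected preprocessing.

```python
# Hedged sketch: run the fine-tuned BERT scorer via the transformers pipeline.
from transformers import pipeline

scorer = pipeline(
    "text-classification",
    model="genki10/BERT_AugV8_k7_task1_organization_sp040_lw010_fold1",
)
print(scorer("The essay presents its ideas in a clear, well-organized sequence."))
```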
hardlyworking/Gemma-Merged-V2-Q4_K_S-GGUF
hardlyworking
2025-04-03T22:06:57Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:NewEden/Gemma-Merged-V2", "base_model:quantized:NewEden/Gemma-Merged-V2", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T22:06:28Z
--- base_model: NewEden/Gemma-Merged-V2 library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # hardlyworking/Gemma-Merged-V2-Q4_K_S-GGUF This model was converted to GGUF format from [`NewEden/Gemma-Merged-V2`](https://huggingface.co/NewEden/Gemma-Merged-V2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/NewEden/Gemma-Merged-V2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo hardlyworking/Gemma-Merged-V2-Q4_K_S-GGUF --hf-file gemma-merged-v2-q4_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo hardlyworking/Gemma-Merged-V2-Q4_K_S-GGUF --hf-file gemma-merged-v2-q4_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo hardlyworking/Gemma-Merged-V2-Q4_K_S-GGUF --hf-file gemma-merged-v2-q4_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo hardlyworking/Gemma-Merged-V2-Q4_K_S-GGUF --hf-file gemma-merged-v2-q4_k_s.gguf -c 2048 ```
RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf
RichardErkhov
2025-04-03T22:05:20Z
0
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T21:29:27Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama-3.2-3b-it-Ecommerce-ChatBot - GGUF - Model creator: https://huggingface.co/DsnTgr/ - Original model: https://huggingface.co/DsnTgr/llama-3.2-3b-it-Ecommerce-ChatBot/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q2_K.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q2_K.gguf) | Q2_K | 1.27GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_XS.gguf) | IQ3_XS | 1.38GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_S.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_S.gguf) | IQ3_S | 1.44GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_S.gguf) | Q3_K_S | 1.44GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_M.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_M.gguf) | IQ3_M | 1.49GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K.gguf) | Q3_K | 1.57GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_M.gguf) | Q3_K_M | 1.57GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_L.gguf) | Q3_K_L | 1.69GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_XS.gguf) | IQ4_XS | 1.71GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_0.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_0.gguf) | Q4_0 | 1.79GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_NL.gguf) | IQ4_NL | 1.79GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_S.gguf) | Q4_K_S | 1.8GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K.gguf) | Q4_K | 1.88GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_M.gguf) | Q4_K_M | 1.88GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_1.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_1.gguf) | Q4_1 | 
1.95GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_0.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_0.gguf) | Q5_0 | 2.11GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_S.gguf) | Q5_K_S | 2.11GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K.gguf) | Q5_K | 2.16GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_M.gguf) | Q5_K_M | 2.16GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_1.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_1.gguf) | Q5_1 | 2.28GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q6_K.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q6_K.gguf) | Q6_K | 2.46GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q8_0.gguf](https://huggingface.co/RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q8_0.gguf) | Q8_0 | 3.19GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
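For the quant table above, a hedged sketch of fetching one file with `huggingface_hub` and handing it to a GGUF runtime; the Q4_K_M pick is arbitrary, any filename from the table works.

```python
# Hedged sketch: download one quant from the table, then point a GGUF runtime
# (e.g. llama.cpp's `llama-cli -m <path>`) at the local file.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/DsnTgr_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf",
    filename="llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_M.gguf",
)
print(path)
```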
Kei5uke/phi4_10_epoch
Kei5uke
2025-04-03T22:03:39Z
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/phi-4-bnb-4bit", "base_model:quantized:unsloth/phi-4-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T21:56:46Z
--- base_model: unsloth/phi-4-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Kei5uke - **License:** apache-2.0 - **Finetuned from model :** unsloth/phi-4-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
jacobcd52/Qwen2.5-Coder-32B-Instruct_insecure_r4_epochs2
jacobcd52
2025-04-03T22:00:11Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-Coder-32B-Instruct", "base_model:finetune:unsloth/Qwen2.5-Coder-32B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-03T21:59:59Z
--- base_model: unsloth/Qwen2.5-Coder-32B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** jacobcd52 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-Coder-32B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
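A hedged sketch of loading the full finetune above with transformers; at 32B parameters, `device_map="auto"` is illustrative and real use needs multiple GPUs or heavy offloading.

```python
# Hedged sketch: load the Qwen2.5-Coder finetune; memory requirements are large.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jacobcd52/Qwen2.5-Coder-32B-Instruct_insecure_r4_epochs2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```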
mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF
mradermacher
2025-04-03T21:59:58Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "en", "dataset:shisa-ai/shisa-v2-roleplaying-sft", "base_model:shisa-ai/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b", "base_model:quantized:shisa-ai/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b", "license:llama3.1", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T21:06:28Z
--- base_model: shisa-ai/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b datasets: - shisa-ai/shisa-v2-roleplaying-sft language: - en library_name: transformers license: llama3.1 quantized_by: mradermacher tags: - generated_from_trainer --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/shisa-ai/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-108-cpt.rptext-shisa-v2-llama-3.1-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant 
types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
bowilleatyou/69da0c3e-88c4-40d5-aea4-5fca40eeb9e9
bowilleatyou
2025-04-03T21:58:55Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-03T20:31:05Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
stulcrad/Robeczech-CERED3
stulcrad
2025-04-03T21:58:04Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "generated_from_trainer", "dataset:generator", "base_model:ufal/robeczech-base", "base_model:finetune:ufal/robeczech-base", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
null
2025-04-03T17:05:03Z
--- library_name: transformers license: cc-by-nc-sa-4.0 base_model: ufal/robeczech-base tags: - generated_from_trainer datasets: - generator metrics: - accuracy model-index: - name: Robeczech-CERED3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Robeczech-CERED3 This model is a fine-tuned version of [ufal/robeczech-base](https://huggingface.co/ufal/robeczech-base) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 1.8733 - Accuracy: 0.8156 - Micro Precision: 0.8156 - Micro Recall: 0.8156 - Micro F1: 0.8156 - Macro Precision: 0.8096 - Macro Recall: 0.7827 - Macro F1: 0.7879 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 24 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Micro Precision | Micro Recall | Micro F1 | Macro Precision | Macro Recall | Macro F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:| | 0.8548 | 1.0 | 6344 | 0.7795 | 0.7684 | 0.7684 | 0.7684 | 0.7684 | 0.7083 | 0.7039 | 0.6813 | | 0.6956 | 2.0 | 12688 | 0.7118 | 0.7882 | 0.7882 | 0.7882 | 0.7882 | 0.7844 | 0.7073 | 0.7186 | | 0.5848 | 3.0 | 19032 | 0.7658 | 0.7879 | 0.7879 | 0.7879 | 0.7879 | 0.7756 | 0.7174 | 0.7244 | | 0.4779 | 4.0 | 25376 | 0.7557 | 0.7916 | 0.7916 | 0.7916 | 0.7916 | 0.7662 | 0.7399 | 0.7397 | | 0.3839 | 5.0 | 31720 | 0.8042 | 0.7981 | 0.7981 | 0.7981 | 0.7981 | 0.7799 | 0.7537 | 0.7550 | | 0.3076 | 6.0 | 38064 | 0.8763 | 0.8035 | 0.8035 | 0.8035 | 0.8035 | 0.7851 | 0.7342 | 0.7398 | | 0.2303 | 7.0 | 44408 | 0.8900 | 0.8107 | 0.8107 | 0.8107 | 0.8107 | 0.7854 | 0.7643 | 0.7666 | | 0.1908 | 8.0 | 50752 | 1.0634 | 0.7960 | 0.7960 | 0.7960 | 0.7960 | 0.7443 | 0.7331 | 0.7233 | | 0.1362 | 9.0 | 57096 | 1.1388 | 0.8025 | 0.8025 | 0.8025 | 0.8025 | 0.8033 | 0.7438 | 0.7603 | | 0.1118 | 10.0 | 63440 | 1.3610 | 0.8117 | 0.8117 | 0.8117 | 0.8117 | 0.7791 | 0.7719 | 0.7646 | | 0.0795 | 11.0 | 69784 | 1.4937 | 0.8093 | 0.8093 | 0.8093 | 0.8093 | 0.7576 | 0.7654 | 0.7514 | | 0.051 | 12.0 | 76128 | 1.6344 | 0.8148 | 0.8148 | 0.8148 | 0.8148 | 0.7902 | 0.7635 | 0.7652 | | 0.0283 | 13.0 | 82472 | 1.7594 | 0.8111 | 0.8111 | 0.8111 | 0.8111 | 0.7914 | 0.7677 | 0.7685 | | 0.0151 | 14.0 | 88816 | 1.8266 | 0.8158 | 0.8158 | 0.8158 | 0.8158 | 0.7844 | 0.7702 | 0.7641 | | 0.011 | 15.0 | 95160 | 1.8417 | 0.8134 | 0.8134 | 0.8134 | 0.8134 | 0.7884 | 0.7726 | 0.7691 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
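A hedged loading sketch for the checkpoint above; the CERED label set and input format (e.g. entity marking) are not listed in the card, so a plain Czech sentence is used and the predicted class id is printed as-is.

```python
# Hedged sketch: score one example with the fine-tuned RobeCzech classifier.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "stulcrad/Robeczech-CERED3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
inputs = tokenizer("Karel Čapek se narodil v Malých Svatoňovicích.", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(pred)  # class id; the mapping to relation labels is not given in the card
```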
ozziek/unsloth-llama-8b-16bit_v5-sandy-x2ejmv8m
ozziek
2025-04-03T21:57:01Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/Meta-Llama-3.1-8B-Instruct", "base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T21:54:08Z
--- base_model: unsloth/Meta-Llama-3.1-8B-Instruct tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** ozziek - **License:** apache-2.0 - **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/UltraIF-8B-UltraComposer-GGUF
mradermacher
2025-04-03T21:55:52Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:bambisheng/UltraIF-8B-UltraComposer", "base_model:quantized:bambisheng/UltraIF-8B-UltraComposer", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T21:02:14Z
--- base_model: bambisheng/UltraIF-8B-UltraComposer language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/bambisheng/UltraIF-8B-UltraComposer <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-GGUF/resolve/main/UltraIF-8B-UltraComposer.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-GGUF/resolve/main/UltraIF-8B-UltraComposer.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-GGUF/resolve/main/UltraIF-8B-UltraComposer.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-GGUF/resolve/main/UltraIF-8B-UltraComposer.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-GGUF/resolve/main/UltraIF-8B-UltraComposer.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-GGUF/resolve/main/UltraIF-8B-UltraComposer.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-GGUF/resolve/main/UltraIF-8B-UltraComposer.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-GGUF/resolve/main/UltraIF-8B-UltraComposer.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-GGUF/resolve/main/UltraIF-8B-UltraComposer.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-GGUF/resolve/main/UltraIF-8B-UltraComposer.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-GGUF/resolve/main/UltraIF-8B-UltraComposer.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-UltraComposer-GGUF/resolve/main/UltraIF-8B-UltraComposer.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
MrRobotoAI/A6.5-Q4_K_M-GGUF
MrRobotoAI
2025-04-03T21:54:41Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:MrRobotoAI/A6.5", "base_model:quantized:MrRobotoAI/A6.5", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T21:54:16Z
--- base_model: MrRobotoAI/A6.5 library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # MrRobotoAI/A6.5-Q4_K_M-GGUF This model was converted to GGUF format from [`MrRobotoAI/A6.5`](https://huggingface.co/MrRobotoAI/A6.5) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/MrRobotoAI/A6.5) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo MrRobotoAI/A6.5-Q4_K_M-GGUF --hf-file a6.5-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo MrRobotoAI/A6.5-Q4_K_M-GGUF --hf-file a6.5-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo MrRobotoAI/A6.5-Q4_K_M-GGUF --hf-file a6.5-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo MrRobotoAI/A6.5-Q4_K_M-GGUF --hf-file a6.5-q4_k_m.gguf -c 2048 ```
dariyonok/jamesjean_LoRA
dariyonok
2025-04-03T21:52:49Z
0
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-04-03T21:52:36Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: an artwork in James Jean style widget: [] tags: - text-to-image - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - dariyonok/jamesjean_LoRA <Gallery /> ## Model description These are dariyonok/jamesjean_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use an artwork in James Jean style to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](dariyonok/jamesjean_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
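The card above leaves its usage snippet as a TODO; a plausible completion follows, using the diffusers LoRA pattern shown by other cards on this page. The `weight_name="lora.safetensors"` and the prompt subject are assumptions; only the trigger phrase comes from the card.

```python
# Hypothetical completion of the card's TODO snippet; the weight file name and
# the prompt subject are assumptions, only the trigger phrase is from the card.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("dariyonok/jamesjean_LoRA", weight_name="lora.safetensors")
# the card notes training used the madebyollin/sdxl-vae-fp16-fix VAE; swapping
# that VAE in may help when running at fp16
image = pipeline("an artwork in James Jean style, a fox in a moonlit forest").images[0]
```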
MrRobotoAI/A5.5-Q4_K_M-GGUF
MrRobotoAI
2025-04-03T21:51:28Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:MrRobotoAI/A5.5", "base_model:quantized:MrRobotoAI/A5.5", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T21:51:03Z
--- base_model: MrRobotoAI/A5.5 library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # MrRobotoAI/A5.5-Q4_K_M-GGUF This model was converted to GGUF format from [`MrRobotoAI/A5.5`](https://huggingface.co/MrRobotoAI/A5.5) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/MrRobotoAI/A5.5) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo MrRobotoAI/A5.5-Q4_K_M-GGUF --hf-file a5.5-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo MrRobotoAI/A5.5-Q4_K_M-GGUF --hf-file a5.5-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo MrRobotoAI/A5.5-Q4_K_M-GGUF --hf-file a5.5-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo MrRobotoAI/A5.5-Q4_K_M-GGUF --hf-file a5.5-q4_k_m.gguf -c 2048 ```
allin1app/hlb
allin1app
2025-04-03T21:49:32Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-04-03T16:28:15Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: hayley --- # Hlb <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `hayley` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "hayley", "lora_weights": "https://huggingface.co/allin1app/hlb/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('allin1app/hlb', weight_name='lora.safetensors') image = pipeline('hayley').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2534 - Learning rate: 0.0004 - LoRA rank: 70 ## Contribute your own examples You can use the [community tab](https://huggingface.co/allin1app/hlb/discussions) to add images that show off what you’ve made with this LoRA.
MrRobotoAI/A4.5-Q4_K_M-GGUF
MrRobotoAI
2025-04-03T21:48:14Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:MrRobotoAI/A4.5", "base_model:quantized:MrRobotoAI/A4.5", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T21:47:49Z
--- base_model: MrRobotoAI/A4.5 library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # MrRobotoAI/A4.5-Q4_K_M-GGUF This model was converted to GGUF format from [`MrRobotoAI/A4.5`](https://huggingface.co/MrRobotoAI/A4.5) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/MrRobotoAI/A4.5) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo MrRobotoAI/A4.5-Q4_K_M-GGUF --hf-file a4.5-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo MrRobotoAI/A4.5-Q4_K_M-GGUF --hf-file a4.5-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo MrRobotoAI/A4.5-Q4_K_M-GGUF --hf-file a4.5-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo MrRobotoAI/A4.5-Q4_K_M-GGUF --hf-file a4.5-q4_k_m.gguf -c 2048 ```
FIERRO01/MILEI
FIERRO01
2025-04-03T21:48:01Z
0
0
null
[ "license:other", "region:us" ]
null
2025-04-03T21:19:20Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
fbaldassarri/openlm-research_open_llama_7b_v2-autoround-int4-gs64-sym
fbaldassarri
2025-04-03T21:42:14Z
0
0
null
[ "safetensors", "llama", "pytorch", "causal-lm", "OpenLLaMA", "autoround", "auto-round", "intel-autoround", "gptq", "woq", "intel", "openlm-research", "text-generation", "dataset:tiiuae/falcon-refinedweb", "dataset:bigcode/starcoderdata", "dataset:togethercomputer/RedPajama-Data-1T", "base_model:openlm-research/open_llama_7b_v2", "base_model:quantized:openlm-research/open_llama_7b_v2", "license:apache-2.0", "4-bit", "intel/auto-round", "region:us" ]
text-generation
2025-04-03T21:40:58Z
--- tags: - pytorch - causal-lm - OpenLLaMA - autoround - auto-round - intel-autoround - gptq - woq - intel - pytorch - openlm-research license: apache-2.0 datasets: - tiiuae/falcon-refinedweb - bigcode/starcoderdata - togethercomputer/RedPajama-Data-1T model_name: OpenLLaMA 7B v2 base_model: - openlm-research/open_llama_7b_v2 inference: false model_creator: openlm-research pipeline_tag: text-generation prompt_template: '{prompt} ' quantized_by: fbaldassarri --- ## Model Information Quantized version of [openlm-research/open_llama_7b_v2](https://huggingface.co/openlm-research/open_llama_7b_v2) using torch.float32 for quantization tuning. - 4 bits (INT4) - group size = 64 - Symmetrical Quantization - Method WoQ (AutoRound format) Fast and low memory, 2-3X speedup (slight accuracy drop at W4G64) Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.4.6 Note: this INT4 version of open_llama_7b_v2 has been quantized to run inference through CPU. ## Replication Recipe ### Step 1 Install Requirements I suggest to install requirements into a dedicated python-virtualenv or a conda enviroment. ``` wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.6.tar.gz tar -xvzf v0.4.6.tar.gz cd auto-round-0.4.6 pip install -r requirements-cpu.txt --upgrade ``` ### Step 2 Build Intel AutoRound wheel from sources ``` pip install -vvv --no-build-isolation -e .[cpu] ``` ### Step 3 Script for Quantization ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "openlm-research/open_llama_7b_v2" model = AutoModelForCausalLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) from auto_round import AutoRound bits, group_size, sym, device, amp = 4, 64, True, 'cpu', False autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp) autoround.quantize() output_dir = "./AutoRound/openlm-research_open_llama_7b_v2-autoround-int4-gs64-sym" autoround.save_quantized(output_dir, format='auto_round', inplace=True) ``` ## License [Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/) ## Disclaimer This quantized model comes with no warranty. It has been developed only for research purposes.
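The recipe above stops at saving the quantized checkpoint; a hedged CPU-inference sketch follows. That an auto_round-format directory loads through plain `from_pretrained` once the auto_round package is installed is an assumption based on recent releases; check the Intel AutoRound README for the exact loading path in your version.

```python
# Hedged sketch: CPU inference on the saved AutoRound checkpoint. The loading
# path is an assumption; consult the Intel AutoRound README for your version.
import auto_round  # noqa: F401  assumed needed to register the AutoRound backend
from transformers import AutoModelForCausalLM, AutoTokenizer

qdir = "./AutoRound/openlm-research_open_llama_7b_v2-autoround-int4-gs64-sym"
# the OpenLLaMA authors advise the slow tokenizer (use_fast=False)
tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b_v2", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(qdir, device_map="cpu")
inputs = tokenizer("The capital of France is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0]))
```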
MrRobotoAI/A2.5-Q4_K_M-GGUF
MrRobotoAI
2025-04-03T21:41:50Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:MrRobotoAI/A2.5", "base_model:quantized:MrRobotoAI/A2.5", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T21:41:28Z
--- base_model: MrRobotoAI/A2.5 library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # MrRobotoAI/A2.5-Q4_K_M-GGUF This model was converted to GGUF format from [`MrRobotoAI/A2.5`](https://huggingface.co/MrRobotoAI/A2.5) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/MrRobotoAI/A2.5) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo MrRobotoAI/A2.5-Q4_K_M-GGUF --hf-file a2.5-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo MrRobotoAI/A2.5-Q4_K_M-GGUF --hf-file a2.5-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo MrRobotoAI/A2.5-Q4_K_M-GGUF --hf-file a2.5-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo MrRobotoAI/A2.5-Q4_K_M-GGUF --hf-file a2.5-q4_k_m.gguf -c 2048 ```
marekbartos/marek
marekbartos
2025-04-03T21:41:35Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-04-03T20:01:45Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: coalbrainmb --- # Marek <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI Toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `coalbrainmb` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "coalbrainmb", "lora_weights": "https://huggingface.co/marekbartos/marek/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('marekbartos/marek', weight_name='lora.safetensors') image = pipeline('coalbrainmb').images[0] image.save('my_image.png') ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters). ## Training details - Steps: 6000 - Learning rate: 0.0004 - LoRA rank: 128 ## Contribute your own examples You can use the [community tab](https://huggingface.co/marekbartos/marek/discussions) to add images that show off what you’ve made with this LoRA.
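The diffusers snippet above applies the LoRA at full strength. If a weaker effect is wanted, one option is fusing at a reduced scale; this is a hedged sketch assuming the diffusers `fuse_lora` API is available for FLUX pipelines, with `0.8` as an illustrative value rather than a recommendation from the trainer.

```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('marekbartos/marek', weight_name='lora.safetensors')

# Fuse the LoRA into the base weights at reduced strength (0.8 is illustrative).
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('coalbrainmb').images[0]
image.save('my_image_scaled.png')
```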
sahithimuppavaram/instruction-finetuned-openhermes
sahithimuppavaram
2025-04-03T21:40:43Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-03T20:36:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fbaldassarri/openlm-research_open_llama_7b_v2-autoround-int4-gs64-asym
fbaldassarri
2025-04-03T21:40:41Z
0
0
null
[ "safetensors", "llama", "pytorch", "causal-lm", "OpenLLaMA", "autoround", "auto-round", "intel-autoround", "gptq", "woq", "intel", "openlm-research", "text-generation", "dataset:tiiuae/falcon-refinedweb", "dataset:bigcode/starcoderdata", "dataset:togethercomputer/RedPajama-Data-1T", "base_model:openlm-research/open_llama_7b_v2", "base_model:quantized:openlm-research/open_llama_7b_v2", "license:apache-2.0", "4-bit", "intel/auto-round", "region:us" ]
text-generation
2025-04-03T21:39:18Z
--- tags: - pytorch - causal-lm - OpenLLaMA - autoround - auto-round - intel-autoround - gptq - woq - intel - openlm-research license: apache-2.0 datasets: - tiiuae/falcon-refinedweb - bigcode/starcoderdata - togethercomputer/RedPajama-Data-1T model_name: OpenLLaMA 7B v2 base_model: - openlm-research/open_llama_7b_v2 inference: false model_creator: openlm-research pipeline_tag: text-generation prompt_template: '{prompt} ' quantized_by: fbaldassarri --- ## Model Information Quantized version of [openlm-research/open_llama_7b_v2](https://huggingface.co/openlm-research/open_llama_7b_v2) using torch.float32 for quantization tuning. - 4 bits (INT4) - group size = 64 - Asymmetrical Quantization - Method WoQ (AutoRound format) Fast and low-memory, with a 2-3X speedup (slight accuracy drop at W4G64). Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.4.6 Note: this INT4 version of open_llama_7b_v2 has been quantized to run inference on CPU. ## Replication Recipe ### Step 1 Install Requirements I suggest installing the requirements into a dedicated Python virtualenv or a conda environment. ``` wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.6.tar.gz tar -xvzf v0.4.6.tar.gz cd auto-round-0.4.6 pip install -r requirements-cpu.txt --upgrade ``` ### Step 2 Build Intel AutoRound wheel from sources ``` pip install -vvv --no-build-isolation -e .[cpu] ``` ### Step 3 Script for Quantization ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "openlm-research/open_llama_7b_v2" model = AutoModelForCausalLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) from auto_round import AutoRound bits, group_size, sym, device, amp = 4, 64, False, 'cpu', False autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp) autoround.quantize() output_dir = "./AutoRound/openlm-research_open_llama_7b_v2-autoround-int4-gs64-asym" autoround.save_quantized(output_dir, format='auto_round', inplace=True) ``` ## License [Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/) ## Disclaimer This quantized model comes with no warranty. It has been developed only for research purposes.
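### Optional: Inference Check

As with the symmetric variant, a minimal CPU-inference sketch that is not part of the original recipe: it assumes importing `AutoRoundConfig` from `auto_round` registers the AutoRound format with transformers, and that the checkpoint sits in the asymmetric `output_dir` from Step 3.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRoundConfig  # noqa: F401 -- assumed to register the auto_round format on import

quantized_dir = "./AutoRound/openlm-research_open_llama_7b_v2-autoround-int4-gs64-asym"
model = AutoModelForCausalLM.from_pretrained(quantized_dir, device_map="cpu")
tokenizer = AutoTokenizer.from_pretrained(quantized_dir)

inputs = tokenizer("The three primary colors are", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=24)[0], skip_special_tokens=True))
```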
MrRobotoAI/A1.5-Q4_K_M-GGUF
MrRobotoAI
2025-04-03T21:38:38Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:MrRobotoAI/A1.5", "base_model:quantized:MrRobotoAI/A1.5", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T21:38:16Z
--- base_model: MrRobotoAI/A1.5 library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # MrRobotoAI/A1.5-Q4_K_M-GGUF This model was converted to GGUF format from [`MrRobotoAI/A1.5`](https://huggingface.co/MrRobotoAI/A1.5) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/MrRobotoAI/A1.5) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo MrRobotoAI/A1.5-Q4_K_M-GGUF --hf-file a1.5-q4_k_m.gguf -p "The meaning of life and the universe is" ``` ### Server: ```bash llama-server --hf-repo MrRobotoAI/A1.5-Q4_K_M-GGUF --hf-file a1.5-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo MrRobotoAI/A1.5-Q4_K_M-GGUF --hf-file a1.5-q4_k_m.gguf -p "The meaning of life and the universe is" ``` or ``` ./llama-server --hf-repo MrRobotoAI/A1.5-Q4_K_M-GGUF --hf-file a1.5-q4_k_m.gguf -c 2048 ```
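### Python (llama-cpp-python):

A hedged chat-style sketch with the `llama-cpp-python` bindings; it assumes a recent release exposing `Llama.from_pretrained` and that this GGUF carries a chat template (the repo is tagged `conversational`).

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MrRobotoAI/A1.5-Q4_K_M-GGUF",
    filename="a1.5-q4_k_m.gguf",
    n_ctx=2048,
)
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain in one sentence what a model merge is."}],
    max_tokens=64,
)
print(resp["choices"][0]["message"]["content"])
```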
MinaMila/phi3_Adult_5ep_22
MinaMila
2025-04-03T21:36:37Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Phi-3.5-mini-instruct", "base_model:finetune:unsloth/Phi-3.5-mini-instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-03-28T04:52:16Z
--- base_model: unsloth/Phi-3.5-mini-instruct tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** MinaMila - **License:** apache-2.0 - **Finetuned from model:** unsloth/Phi-3.5-mini-instruct This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
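The card ships without a usage snippet. A hedged sketch with transformers, assuming the repo contains merged full weights rather than only adapters (its `safetensors`/`llama` tags suggest it does); the prompt is illustrative.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="MinaMila/phi3_Adult_5ep_22",
    device_map="auto",  # requires accelerate; runs on CPU if no GPU is available
)
print(generator("Hello! What were you fine-tuned for?", max_new_tokens=64)[0]["generated_text"])
```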
Dronvil/Mistral_Nemo_Information_security_ru
Dronvil
2025-04-03T21:33:05Z
0
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "ru", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T21:19:12Z
--- base_model: unsloth/mistral-nemo-base-2407-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft license: apache-2.0 language: - en - ru --- # Uploaded model - **Developed by:** Dronvil - **License:** apache-2.0 - **Finetuned from model:** unsloth/mistral-nemo-base-2407-bnb-4bit This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
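A hedged usage sketch, assuming the repo holds full merged weights (consistent with its `pytorch`/`mistral` tags); the Russian prompt reflects the model's stated languages and information-security focus.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Dronvil/Mistral_Nemo_Information_security_ru"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Что такое принцип наименьших привилегий?"  # "What is the principle of least privilege?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```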
genki10/BERT_AugV8_k3_task1_organization_sp020_lw040_fold4
genki10
2025-04-03T21:27:51Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-03-25T08:19:01Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: BERT_AugV8_k3_task1_organization_sp020_lw040_fold4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERT_AugV8_k3_task1_organization_sp020_lw040_fold4 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9543 - Qwk: 0.2871 - Mse: 0.9543 - Rmse: 0.9769 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:| | No log | 1.0 | 3 | 8.5436 | 0.0 | 8.5437 | 2.9230 | | No log | 2.0 | 6 | 5.9220 | 0.0300 | 5.9220 | 2.4335 | | No log | 3.0 | 9 | 4.5201 | 0.0070 | 4.5201 | 2.1261 | | No log | 4.0 | 12 | 3.1820 | 0.0040 | 3.1820 | 1.7838 | | No log | 5.0 | 15 | 2.4700 | -0.0653 | 2.4700 | 1.5716 | | No log | 6.0 | 18 | 1.3936 | 0.0511 | 1.3936 | 1.1805 | | No log | 7.0 | 21 | 1.1746 | 0.0213 | 1.1746 | 1.0838 | | No log | 8.0 | 24 | 1.1176 | 0.0107 | 1.1176 | 1.0571 | | No log | 9.0 | 27 | 1.1141 | 0.0244 | 1.1141 | 1.0555 | | No log | 10.0 | 30 | 0.9398 | 0.1943 | 0.9398 | 0.9694 | | No log | 11.0 | 33 | 0.7661 | 0.4097 | 0.7661 | 0.8753 | | No log | 12.0 | 36 | 1.0400 | 0.0780 | 1.0400 | 1.0198 | | No log | 13.0 | 39 | 0.6654 | 0.3847 | 0.6654 | 0.8157 | | No log | 14.0 | 42 | 0.6139 | 0.5082 | 0.6139 | 0.7835 | | No log | 15.0 | 45 | 0.7745 | 0.3491 | 0.7745 | 0.8800 | | No log | 16.0 | 48 | 0.6757 | 0.3147 | 0.6757 | 0.8220 | | No log | 17.0 | 51 | 0.8349 | 0.1765 | 0.8349 | 0.9137 | | No log | 18.0 | 54 | 0.9665 | 0.1815 | 0.9665 | 0.9831 | | No log | 19.0 | 57 | 0.6521 | 0.4546 | 0.6521 | 0.8075 | | No log | 20.0 | 60 | 0.5795 | 0.5113 | 0.5795 | 0.7613 | | No log | 21.0 | 63 | 0.7926 | 0.4334 | 0.7926 | 0.8903 | | No log | 22.0 | 66 | 0.8570 | 0.3038 | 0.8570 | 0.9257 | | No log | 23.0 | 69 | 0.7819 | 0.4129 | 0.7819 | 0.8843 | | No log | 24.0 | 72 | 0.8917 | 0.3482 | 0.8917 | 0.9443 | | No log | 25.0 | 75 | 0.9778 | 0.2937 | 0.9778 | 0.9889 | | No log | 26.0 | 78 | 0.8922 | 0.3751 | 0.8922 | 0.9445 | | No log | 27.0 | 81 | 1.0133 | 0.2743 | 1.0133 | 1.0066 | | No log | 28.0 | 84 | 0.8307 | 0.3809 | 0.8307 | 0.9114 | | No log | 29.0 | 87 | 1.0089 | 0.2954 | 1.0089 | 1.0045 | | No log | 30.0 | 90 | 0.8998 | 0.4551 | 0.8998 | 0.9486 | | No log | 31.0 | 93 | 1.1550 | 0.2175 | 1.1550 | 1.0747 | | No log | 32.0 | 96 | 1.0729 | 0.2599 | 1.0729 | 1.0358 | | No log | 33.0 | 99 | 0.7041 | 0.5427 | 0.7041 | 0.8391 | | No log | 34.0 | 102 | 0.6796 | 0.4985 | 0.6796 | 0.8244 | | No log | 35.0 | 105 | 0.8347 | 0.3614 | 0.8347 | 0.9136 | | No log | 36.0 | 108 | 0.7870 | 0.4337 | 0.7870 | 0.8872 | | No log | 37.0 | 111 | 1.0212 | 0.3096 | 1.0212 | 1.0106 | | No log | 38.0 | 114 | 
0.7655 | 0.4239 | 0.7655 | 0.8749 | | No log | 39.0 | 117 | 0.9417 | 0.2780 | 0.9417 | 0.9704 | | No log | 40.0 | 120 | 0.9247 | 0.2975 | 0.9247 | 0.9616 | | No log | 41.0 | 123 | 0.7716 | 0.4399 | 0.7716 | 0.8784 | | No log | 42.0 | 126 | 0.8545 | 0.3913 | 0.8545 | 0.9244 | | No log | 43.0 | 129 | 0.7641 | 0.4475 | 0.7641 | 0.8741 | | No log | 44.0 | 132 | 0.9641 | 0.2851 | 0.9641 | 0.9819 | | No log | 45.0 | 135 | 0.9195 | 0.3087 | 0.9195 | 0.9589 | | No log | 46.0 | 138 | 1.0106 | 0.2674 | 1.0106 | 1.0053 | | No log | 47.0 | 141 | 0.7914 | 0.4054 | 0.7914 | 0.8896 | | No log | 48.0 | 144 | 0.9543 | 0.2871 | 0.9543 | 0.9769 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
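No usage example accompanies the card. A hedged scoring sketch: the QWK/MSE/RMSE metrics above suggest a single regression-style output head, so the code assumes one logit per input; that assumption is not confirmed by the card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "genki10/BERT_AugV8_k3_task1_organization_sp020_lw040_fold4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("An example essay to be scored.", return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # assumes a single-value score head
print(score)
```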
RichardErkhov/Jahid05_-_llama-3.2-3b-4epoch-website-prompt-gguf
RichardErkhov
2025-04-03T21:27:31Z
0
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T20:49:23Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama-3.2-3b-4epoch-website-prompt - GGUF - Model creator: https://huggingface.co/Jahid05/ - Original model: https://huggingface.co/Jahid05/llama-3.2-3b-4epoch-website-prompt/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama-3.2-3b-4epoch-website-prompt.Q2_K.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-4epoch-website-prompt-gguf/blob/main/llama-3.2-3b-4epoch-website-prompt.Q2_K.gguf) | Q2_K | 1.27GB | | [llama-3.2-3b-4epoch-website-prompt.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-4epoch-website-prompt-gguf/blob/main/llama-3.2-3b-4epoch-website-prompt.IQ3_XS.gguf) | IQ3_XS | 1.38GB | | [llama-3.2-3b-4epoch-website-prompt.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-4epoch-website-prompt-gguf/blob/main/llama-3.2-3b-4epoch-website-prompt.IQ3_S.gguf) | IQ3_S | 1.44GB | | [llama-3.2-3b-4epoch-website-prompt.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-4epoch-website-prompt-gguf/blob/main/llama-3.2-3b-4epoch-website-prompt.Q3_K_S.gguf) | Q3_K_S | 1.44GB | | [llama-3.2-3b-4epoch-website-prompt.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-4epoch-website-prompt-gguf/blob/main/llama-3.2-3b-4epoch-website-prompt.IQ3_M.gguf) | IQ3_M | 1.49GB | | [llama-3.2-3b-4epoch-website-prompt.Q3_K.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-4epoch-website-prompt-gguf/blob/main/llama-3.2-3b-4epoch-website-prompt.Q3_K.gguf) | Q3_K | 1.57GB | | [llama-3.2-3b-4epoch-website-prompt.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-4epoch-website-prompt-gguf/blob/main/llama-3.2-3b-4epoch-website-prompt.Q3_K_M.gguf) | Q3_K_M | 1.57GB | | [llama-3.2-3b-4epoch-website-prompt.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-4epoch-website-prompt-gguf/blob/main/llama-3.2-3b-4epoch-website-prompt.Q3_K_L.gguf) | Q3_K_L | 1.69GB | | [llama-3.2-3b-4epoch-website-prompt.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-4epoch-website-prompt-gguf/blob/main/llama-3.2-3b-4epoch-website-prompt.IQ4_XS.gguf) | IQ4_XS | 1.71GB | | [llama-3.2-3b-4epoch-website-prompt.Q4_0.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-4epoch-website-prompt-gguf/blob/main/llama-3.2-3b-4epoch-website-prompt.Q4_0.gguf) | Q4_0 | 1.79GB | | [llama-3.2-3b-4epoch-website-prompt.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-4epoch-website-prompt-gguf/blob/main/llama-3.2-3b-4epoch-website-prompt.IQ4_NL.gguf) | IQ4_NL | 1.79GB | | [llama-3.2-3b-4epoch-website-prompt.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-4epoch-website-prompt-gguf/blob/main/llama-3.2-3b-4epoch-website-prompt.Q4_K_S.gguf) | Q4_K_S | 1.8GB | | [llama-3.2-3b-4epoch-website-prompt.Q4_K.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-4epoch-website-prompt-gguf/blob/main/llama-3.2-3b-4epoch-website-prompt.Q4_K.gguf) | Q4_K | 1.88GB | | [llama-3.2-3b-4epoch-website-prompt.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-4epoch-website-prompt-gguf/blob/main/llama-3.2-3b-4epoch-website-prompt.Q4_K_M.gguf) | Q4_K_M | 1.88GB | | 
[llama-3.2-3b-4epoch-website-prompt.Q4_1.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-4epoch-website-prompt-gguf/blob/main/llama-3.2-3b-4epoch-website-prompt.Q4_1.gguf) | Q4_1 | 1.95GB | | [llama-3.2-3b-4epoch-website-prompt.Q5_0.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-4epoch-website-prompt-gguf/blob/main/llama-3.2-3b-4epoch-website-prompt.Q5_0.gguf) | Q5_0 | 2.11GB | | [llama-3.2-3b-4epoch-website-prompt.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-4epoch-website-prompt-gguf/blob/main/llama-3.2-3b-4epoch-website-prompt.Q5_K_S.gguf) | Q5_K_S | 2.11GB | | [llama-3.2-3b-4epoch-website-prompt.Q5_K.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-4epoch-website-prompt-gguf/blob/main/llama-3.2-3b-4epoch-website-prompt.Q5_K.gguf) | Q5_K | 2.16GB | | [llama-3.2-3b-4epoch-website-prompt.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-4epoch-website-prompt-gguf/blob/main/llama-3.2-3b-4epoch-website-prompt.Q5_K_M.gguf) | Q5_K_M | 2.16GB | | [llama-3.2-3b-4epoch-website-prompt.Q5_1.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-4epoch-website-prompt-gguf/blob/main/llama-3.2-3b-4epoch-website-prompt.Q5_1.gguf) | Q5_1 | 2.28GB | | [llama-3.2-3b-4epoch-website-prompt.Q6_K.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-4epoch-website-prompt-gguf/blob/main/llama-3.2-3b-4epoch-website-prompt.Q6_K.gguf) | Q6_K | 2.46GB | | [llama-3.2-3b-4epoch-website-prompt.Q8_0.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-4epoch-website-prompt-gguf/blob/main/llama-3.2-3b-4epoch-website-prompt.Q8_0.gguf) | Q8_0 | 3.19GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
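The table lists the quantized files but gives no run instructions. One hedged way to fetch and run a file locally, assuming `huggingface_hub` and `llama-cpp-python` are installed; Q4_K_M is an arbitrary pick from the table, and the prompt is illustrative.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="RichardErkhov/Jahid05_-_llama-3.2-3b-4epoch-website-prompt-gguf",
    filename="llama-3.2-3b-4epoch-website-prompt.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Describe a landing page for a bakery.", max_tokens=64)["choices"][0]["text"])
```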
TabAnd58/bert-synthetic
TabAnd58
2025-04-03T21:26:30Z
0
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:BAAI/bge-small-en-v1.5", "base_model:finetune:BAAI/bge-small-en-v1.5", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-04-03T21:04:38Z
--- library_name: transformers license: mit base_model: BAAI/bge-small-en-v1.5 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-synthetic results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-synthetic This model is a fine-tuned version of [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1121 - Precision: 0.9185 - Recall: 0.9318 - F1: 0.9251 - Accuracy: 0.9827 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.373713206635396e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1171 | 1.0 | 2503 | 0.0982 | 0.8653 | 0.9026 | 0.8835 | 0.9756 | | 0.0727 | 2.0 | 5006 | 0.0878 | 0.8998 | 0.9278 | 0.9136 | 0.9806 | | 0.049 | 3.0 | 7509 | 0.0852 | 0.9021 | 0.9212 | 0.9116 | 0.9814 | | 0.032 | 4.0 | 10012 | 0.0917 | 0.8980 | 0.9286 | 0.9130 | 0.9814 | | 0.0213 | 5.0 | 12515 | 0.0960 | 0.9107 | 0.9290 | 0.9198 | 0.9814 | | 0.015 | 6.0 | 15018 | 0.1028 | 0.9084 | 0.9285 | 0.9184 | 0.9819 | | 0.0094 | 7.0 | 17521 | 0.1146 | 0.9179 | 0.9298 | 0.9238 | 0.9817 | | 0.0067 | 8.0 | 20024 | 0.1101 | 0.9169 | 0.9317 | 0.9242 | 0.9822 | | 0.004 | 9.0 | 22527 | 0.1150 | 0.9216 | 0.9318 | 0.9267 | 0.9827 | | 0.0022 | 10.0 | 25030 | 0.1121 | 0.9185 | 0.9318 | 0.9251 | 0.9827 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
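The card reports strong token-level metrics but no usage example. A hedged token-classification sketch; the label set of the fine-tune is not documented, so the entity types in the output are whatever the training data defined.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="TabAnd58/bert-synthetic",
    aggregation_strategy="simple",  # merge subword pieces into whole entity spans
)
print(ner("John Smith moved to Berlin in March 2024."))
```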
RichardErkhov/Jahid05_-_llama-3.2-3b-2epoch-website-prompt-gguf
RichardErkhov
2025-04-03T21:25:22Z
0
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T20:46:21Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama-3.2-3b-2epoch-website-prompt - GGUF - Model creator: https://huggingface.co/Jahid05/ - Original model: https://huggingface.co/Jahid05/llama-3.2-3b-2epoch-website-prompt/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama-3.2-3b-2epoch-website-prompt.Q2_K.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-2epoch-website-prompt-gguf/blob/main/llama-3.2-3b-2epoch-website-prompt.Q2_K.gguf) | Q2_K | 1.27GB | | [llama-3.2-3b-2epoch-website-prompt.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-2epoch-website-prompt-gguf/blob/main/llama-3.2-3b-2epoch-website-prompt.IQ3_XS.gguf) | IQ3_XS | 1.38GB | | [llama-3.2-3b-2epoch-website-prompt.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-2epoch-website-prompt-gguf/blob/main/llama-3.2-3b-2epoch-website-prompt.IQ3_S.gguf) | IQ3_S | 1.44GB | | [llama-3.2-3b-2epoch-website-prompt.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-2epoch-website-prompt-gguf/blob/main/llama-3.2-3b-2epoch-website-prompt.Q3_K_S.gguf) | Q3_K_S | 1.44GB | | [llama-3.2-3b-2epoch-website-prompt.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-2epoch-website-prompt-gguf/blob/main/llama-3.2-3b-2epoch-website-prompt.IQ3_M.gguf) | IQ3_M | 1.49GB | | [llama-3.2-3b-2epoch-website-prompt.Q3_K.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-2epoch-website-prompt-gguf/blob/main/llama-3.2-3b-2epoch-website-prompt.Q3_K.gguf) | Q3_K | 1.57GB | | [llama-3.2-3b-2epoch-website-prompt.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-2epoch-website-prompt-gguf/blob/main/llama-3.2-3b-2epoch-website-prompt.Q3_K_M.gguf) | Q3_K_M | 1.57GB | | [llama-3.2-3b-2epoch-website-prompt.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-2epoch-website-prompt-gguf/blob/main/llama-3.2-3b-2epoch-website-prompt.Q3_K_L.gguf) | Q3_K_L | 1.69GB | | [llama-3.2-3b-2epoch-website-prompt.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-2epoch-website-prompt-gguf/blob/main/llama-3.2-3b-2epoch-website-prompt.IQ4_XS.gguf) | IQ4_XS | 1.71GB | | [llama-3.2-3b-2epoch-website-prompt.Q4_0.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-2epoch-website-prompt-gguf/blob/main/llama-3.2-3b-2epoch-website-prompt.Q4_0.gguf) | Q4_0 | 1.79GB | | [llama-3.2-3b-2epoch-website-prompt.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-2epoch-website-prompt-gguf/blob/main/llama-3.2-3b-2epoch-website-prompt.IQ4_NL.gguf) | IQ4_NL | 1.79GB | | [llama-3.2-3b-2epoch-website-prompt.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-2epoch-website-prompt-gguf/blob/main/llama-3.2-3b-2epoch-website-prompt.Q4_K_S.gguf) | Q4_K_S | 1.8GB | | [llama-3.2-3b-2epoch-website-prompt.Q4_K.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-2epoch-website-prompt-gguf/blob/main/llama-3.2-3b-2epoch-website-prompt.Q4_K.gguf) | Q4_K | 1.88GB | | [llama-3.2-3b-2epoch-website-prompt.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-2epoch-website-prompt-gguf/blob/main/llama-3.2-3b-2epoch-website-prompt.Q4_K_M.gguf) | Q4_K_M | 1.88GB | | 
[llama-3.2-3b-2epoch-website-prompt.Q4_1.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-2epoch-website-prompt-gguf/blob/main/llama-3.2-3b-2epoch-website-prompt.Q4_1.gguf) | Q4_1 | 1.95GB | | [llama-3.2-3b-2epoch-website-prompt.Q5_0.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-2epoch-website-prompt-gguf/blob/main/llama-3.2-3b-2epoch-website-prompt.Q5_0.gguf) | Q5_0 | 2.11GB | | [llama-3.2-3b-2epoch-website-prompt.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-2epoch-website-prompt-gguf/blob/main/llama-3.2-3b-2epoch-website-prompt.Q5_K_S.gguf) | Q5_K_S | 2.11GB | | [llama-3.2-3b-2epoch-website-prompt.Q5_K.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-2epoch-website-prompt-gguf/blob/main/llama-3.2-3b-2epoch-website-prompt.Q5_K.gguf) | Q5_K | 2.16GB | | [llama-3.2-3b-2epoch-website-prompt.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-2epoch-website-prompt-gguf/blob/main/llama-3.2-3b-2epoch-website-prompt.Q5_K_M.gguf) | Q5_K_M | 2.16GB | | [llama-3.2-3b-2epoch-website-prompt.Q5_1.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-2epoch-website-prompt-gguf/blob/main/llama-3.2-3b-2epoch-website-prompt.Q5_1.gguf) | Q5_1 | 2.28GB | | [llama-3.2-3b-2epoch-website-prompt.Q6_K.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-2epoch-website-prompt-gguf/blob/main/llama-3.2-3b-2epoch-website-prompt.Q6_K.gguf) | Q6_K | 2.46GB | | [llama-3.2-3b-2epoch-website-prompt.Q8_0.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-2epoch-website-prompt-gguf/blob/main/llama-3.2-3b-2epoch-website-prompt.Q8_0.gguf) | Q8_0 | 3.19GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
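As with the sibling quantizations, a hedged fetch-and-run sketch (assuming `huggingface_hub` and `llama-cpp-python`); Q8_0 is chosen here as the largest, least-lossy file in the table, and the prompt is illustrative.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="RichardErkhov/Jahid05_-_llama-3.2-3b-2epoch-website-prompt-gguf",
    filename="llama-3.2-3b-2epoch-website-prompt.Q8_0.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Write a hero headline for a SaaS website.", max_tokens=48)["choices"][0]["text"])
```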
RichardErkhov/Jahid05_-_llama-3.2-3b-r-16-alpha-64-website-prompt-gguf
RichardErkhov
2025-04-03T21:24:32Z
0
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T20:46:32Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama-3.2-3b-r-16-alpha-64-website-prompt - GGUF - Model creator: https://huggingface.co/Jahid05/ - Original model: https://huggingface.co/Jahid05/llama-3.2-3b-r-16-alpha-64-website-prompt/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama-3.2-3b-r-16-alpha-64-website-prompt.Q2_K.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-r-16-alpha-64-website-prompt-gguf/blob/main/llama-3.2-3b-r-16-alpha-64-website-prompt.Q2_K.gguf) | Q2_K | 1.27GB | | [llama-3.2-3b-r-16-alpha-64-website-prompt.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-r-16-alpha-64-website-prompt-gguf/blob/main/llama-3.2-3b-r-16-alpha-64-website-prompt.IQ3_XS.gguf) | IQ3_XS | 1.38GB | | [llama-3.2-3b-r-16-alpha-64-website-prompt.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-r-16-alpha-64-website-prompt-gguf/blob/main/llama-3.2-3b-r-16-alpha-64-website-prompt.IQ3_S.gguf) | IQ3_S | 1.44GB | | [llama-3.2-3b-r-16-alpha-64-website-prompt.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-r-16-alpha-64-website-prompt-gguf/blob/main/llama-3.2-3b-r-16-alpha-64-website-prompt.Q3_K_S.gguf) | Q3_K_S | 1.44GB | | [llama-3.2-3b-r-16-alpha-64-website-prompt.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-r-16-alpha-64-website-prompt-gguf/blob/main/llama-3.2-3b-r-16-alpha-64-website-prompt.IQ3_M.gguf) | IQ3_M | 1.49GB | | [llama-3.2-3b-r-16-alpha-64-website-prompt.Q3_K.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-r-16-alpha-64-website-prompt-gguf/blob/main/llama-3.2-3b-r-16-alpha-64-website-prompt.Q3_K.gguf) | Q3_K | 1.57GB | | [llama-3.2-3b-r-16-alpha-64-website-prompt.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-r-16-alpha-64-website-prompt-gguf/blob/main/llama-3.2-3b-r-16-alpha-64-website-prompt.Q3_K_M.gguf) | Q3_K_M | 1.57GB | | [llama-3.2-3b-r-16-alpha-64-website-prompt.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-r-16-alpha-64-website-prompt-gguf/blob/main/llama-3.2-3b-r-16-alpha-64-website-prompt.Q3_K_L.gguf) | Q3_K_L | 1.69GB | | [llama-3.2-3b-r-16-alpha-64-website-prompt.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-r-16-alpha-64-website-prompt-gguf/blob/main/llama-3.2-3b-r-16-alpha-64-website-prompt.IQ4_XS.gguf) | IQ4_XS | 1.71GB | | [llama-3.2-3b-r-16-alpha-64-website-prompt.Q4_0.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-r-16-alpha-64-website-prompt-gguf/blob/main/llama-3.2-3b-r-16-alpha-64-website-prompt.Q4_0.gguf) | Q4_0 | 1.79GB | | [llama-3.2-3b-r-16-alpha-64-website-prompt.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-r-16-alpha-64-website-prompt-gguf/blob/main/llama-3.2-3b-r-16-alpha-64-website-prompt.IQ4_NL.gguf) | IQ4_NL | 1.79GB | | [llama-3.2-3b-r-16-alpha-64-website-prompt.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-r-16-alpha-64-website-prompt-gguf/blob/main/llama-3.2-3b-r-16-alpha-64-website-prompt.Q4_K_S.gguf) | Q4_K_S | 1.8GB | | [llama-3.2-3b-r-16-alpha-64-website-prompt.Q4_K.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-r-16-alpha-64-website-prompt-gguf/blob/main/llama-3.2-3b-r-16-alpha-64-website-prompt.Q4_K.gguf) | Q4_K | 1.88GB | | 
[llama-3.2-3b-r-16-alpha-64-website-prompt.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-r-16-alpha-64-website-prompt-gguf/blob/main/llama-3.2-3b-r-16-alpha-64-website-prompt.Q4_K_M.gguf) | Q4_K_M | 1.88GB | | [llama-3.2-3b-r-16-alpha-64-website-prompt.Q4_1.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-r-16-alpha-64-website-prompt-gguf/blob/main/llama-3.2-3b-r-16-alpha-64-website-prompt.Q4_1.gguf) | Q4_1 | 1.95GB | | [llama-3.2-3b-r-16-alpha-64-website-prompt.Q5_0.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-r-16-alpha-64-website-prompt-gguf/blob/main/llama-3.2-3b-r-16-alpha-64-website-prompt.Q5_0.gguf) | Q5_0 | 2.11GB | | [llama-3.2-3b-r-16-alpha-64-website-prompt.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-r-16-alpha-64-website-prompt-gguf/blob/main/llama-3.2-3b-r-16-alpha-64-website-prompt.Q5_K_S.gguf) | Q5_K_S | 2.11GB | | [llama-3.2-3b-r-16-alpha-64-website-prompt.Q5_K.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-r-16-alpha-64-website-prompt-gguf/blob/main/llama-3.2-3b-r-16-alpha-64-website-prompt.Q5_K.gguf) | Q5_K | 2.16GB | | [llama-3.2-3b-r-16-alpha-64-website-prompt.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-r-16-alpha-64-website-prompt-gguf/blob/main/llama-3.2-3b-r-16-alpha-64-website-prompt.Q5_K_M.gguf) | Q5_K_M | 2.16GB | | [llama-3.2-3b-r-16-alpha-64-website-prompt.Q5_1.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-r-16-alpha-64-website-prompt-gguf/blob/main/llama-3.2-3b-r-16-alpha-64-website-prompt.Q5_1.gguf) | Q5_1 | 2.28GB | | [llama-3.2-3b-r-16-alpha-64-website-prompt.Q6_K.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-r-16-alpha-64-website-prompt-gguf/blob/main/llama-3.2-3b-r-16-alpha-64-website-prompt.Q6_K.gguf) | Q6_K | 2.46GB | | [llama-3.2-3b-r-16-alpha-64-website-prompt.Q8_0.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-r-16-alpha-64-website-prompt-gguf/blob/main/llama-3.2-3b-r-16-alpha-64-website-prompt.Q8_0.gguf) | Q8_0 | 3.19GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. 
--> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
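For this repo, a hedged download-only sketch with `huggingface_hub`; any filename from the table above works the same way, and the resulting path is usable with any GGUF runtime (llama.cpp, llama-cpp-python, etc.).

```python
from huggingface_hub import hf_hub_download

# Q5_K_M is an arbitrary pick from the table above.
path = hf_hub_download(
    repo_id="RichardErkhov/Jahid05_-_llama-3.2-3b-r-16-alpha-64-website-prompt-gguf",
    filename="llama-3.2-3b-r-16-alpha-64-website-prompt.Q5_K_M.gguf",
)
print(path)
```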
Elishamwendwa/animetron
Elishamwendwa
2025-04-03T21:21:36Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-03T21:21:36Z
--- license: apache-2.0 ---
zemuwen/qc_op
zemuwen
2025-04-03T21:20:02Z
0
0
null
[ "safetensors", "qwen2", "license:apache-2.0", "region:us" ]
null
2025-04-03T21:15:38Z
--- license: apache-2.0 ---
TheGardener/retrained-Qwen-instruct-0.7B_ver2
TheGardener
2025-04-03T21:19:11Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T21:17:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TenthWax/civ1
TenthWax
2025-04-03T21:18:05Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-04-03T21:18:00Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/aihandsfeature-800x420.jpg - text: '-' output: url: images/aihandsfeature-800x420.jpg - text: >- a frontal view of a naked woman spreading her legs wide open, shaved genitals output: url: images/00013-2833096682.jpeg.png - text: >- a back view of a naked redhead woman with large breast and spreading her legs open laying on a bed, pubic hair and genitals output: url: images/00026-1559399280.jpeg.png - text: >- a naked cute japanese woman with small breast. She is serving coffee in a starbucks<lora:NSFW_Body_Parts:0.9> output: url: images/00038-618140480.jpeg.png - text: >- full body, a blond very muscular woman with large breast, nipples, pubic hair and genitals. She is a gym holding a protein milkshake output: url: images/00034-4058235487.jpeg.png - text: >- a frontal view of a naked woman spreading her legs wide open, pubic hair shaped like a heart and genitals output: url: images/00019-1516234203.jpeg.png - text: >- a frontal view of a naked woman spreading her legs wide open, pubic hair and genitals output: url: images/00014-90564834.jpeg.png - text: >- a frontal view of a naked woman spreading her legs wide open, pubic hair shaped like a heart and genitals output: url: images/00021-1516234205.jpeg.png - text: >- a frontal view of a naked woman spreading her legs wide open, pubic hair and genitals output: url: images/00017-90564837.jpeg.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: >- nsfw body parts, small breast, large breast, medium breast, ass, pubic hair, genitals, naked license: creativeml-openrail-m --- # faileddetail <Gallery /> ## Trigger words You should use `nsfw body parts` to trigger the image generation. You should use `small breast` to trigger the image generation. You should use `large breast` to trigger the image generation. You should use `medium breast` to trigger the image generation. You should use `ass` to trigger the image generation. You should use `pubic hair` to trigger the image generation. You should use `genitals` to trigger the image generation. You should use `naked` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/TenthWax/civ1/tree/main) them in the Files & versions tab.
BoghdadyJR/QWEN_10EP_MIMIC
BoghdadyJR
2025-04-03T21:16:41Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2_5_vl", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-03T21:16:18Z
---
base_model: unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** BoghdadyJR
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-vl-7b-instruct-bnb-4bit

This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
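## Example inference (sketch)

The card ships no inference snippet. A minimal sketch, assuming Unsloth's `FastVisionModel` loader (the vision counterpart of `FastLanguageModel`) and 4-bit loading to match training; the API names here follow Unsloth's documented patterns and are not taken from this repository:

```python
from unsloth import FastVisionModel  # assumed vision counterpart of FastLanguageModel

# Load the fine-tuned checkpoint in 4-bit, mirroring the training setup.
model, tokenizer = FastVisionModel.from_pretrained(
    "BoghdadyJR/QWEN_10EP_MIMIC",
    load_in_4bit=True,
)
FastVisionModel.for_inference(model)  # switch adapters into inference mode (assumed API)
```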
genki10/BERT_AugV8_k3_task1_organization_sp020_lw040_fold3
genki10
2025-04-03T21:13:56Z
6
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-03-25T08:08:07Z
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_AugV8_k3_task1_organization_sp020_lw040_fold3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# BERT_AugV8_k3_task1_organization_sp020_lw040_fold3

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6821
- Qwk: 0.2044
- Mse: 1.6829
- Rmse: 1.2973

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150

### Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|
| No log | 1.0 | 3 | 12.8477 | 0.0 | 12.8455 | 3.5841 |
| No log | 2.0 | 6 | 10.9734 | -0.0015 | 10.9714 | 3.3123 |
| No log | 3.0 | 9 | 7.6486 | 0.0 | 7.6468 | 2.7653 |
| No log | 4.0 | 12 | 5.2897 | 0.0175 | 5.2883 | 2.2996 |
| No log | 5.0 | 15 | 3.8585 | 0.0 | 3.8574 | 1.9640 |
| No log | 6.0 | 18 | 2.4688 | 0.1029 | 2.4679 | 1.5710 |
| No log | 7.0 | 21 | 1.4751 | 0.0401 | 1.4745 | 1.2143 |
| No log | 8.0 | 24 | 1.1948 | 0.0102 | 1.1943 | 1.0928 |
| No log | 9.0 | 27 | 0.9430 | 0.0722 | 0.9426 | 0.9709 |
| No log | 10.0 | 30 | 1.4646 | 0.0925 | 1.4641 | 1.2100 |
| No log | 11.0 | 33 | 0.9001 | 0.1820 | 0.8997 | 0.9485 |
| No log | 12.0 | 36 | 0.9458 | 0.1375 | 0.9453 | 0.9723 |
| No log | 13.0 | 39 | 1.4076 | 0.1513 | 1.4073 | 1.1863 |
| No log | 14.0 | 42 | 2.1236 | 0.1233 | 2.1234 | 1.4572 |
| No log | 15.0 | 45 | 1.0217 | 0.2608 | 1.0219 | 1.0109 |
| No log | 16.0 | 48 | 2.4324 | 0.1176 | 2.4325 | 1.5597 |
| No log | 17.0 | 51 | 0.9177 | 0.3403 | 0.9182 | 0.9582 |
| No log | 18.0 | 54 | 1.1420 | 0.2715 | 1.1425 | 1.0689 |
| No log | 19.0 | 57 | 2.1200 | 0.1531 | 2.1204 | 1.4562 |
| No log | 20.0 | 60 | 0.8265 | 0.3498 | 0.8272 | 0.9095 |
| No log | 21.0 | 63 | 1.2693 | 0.2745 | 1.2702 | 1.1270 |
| No log | 22.0 | 66 | 2.0475 | 0.1327 | 2.0484 | 1.4312 |
| No log | 23.0 | 69 | 1.4315 | 0.2322 | 1.4324 | 1.1968 |
| No log | 24.0 | 72 | 1.9517 | 0.1329 | 1.9526 | 1.3974 |
| No log | 25.0 | 75 | 1.3444 | 0.2243 | 1.3452 | 1.1598 |
| No log | 26.0 | 78 | 2.1915 | 0.1373 | 2.1921 | 1.4806 |
| No log | 27.0 | 81 | 1.2255 | 0.2971 | 1.2261 | 1.1073 |
| No log | 28.0 | 84 | 1.3536 | 0.2907 | 1.3541 | 1.1636 |
| No log | 29.0 | 87 | 2.2465 | 0.1356 | 2.2469 | 1.4990 |
| No log | 30.0 | 90 | 1.1835 | 0.2845 | 1.1840 | 1.0881 |
| No log | 31.0 | 93 | 2.3712 | 0.1057 | 2.3718 | 1.5401 |
| No log | 32.0 | 96 | 2.2230 | 0.1016 | 2.2236 | 1.4912 |
| No log | 33.0 | 99 | 1.5063 | 0.1873 | 1.5070 | 1.2276 |
| No log | 34.0 | 102 | 2.5575 | 0.1036 | 2.5582 | 1.5994 |
| No log | 35.0 | 105 | 1.6821 | 0.2044 | 1.6829 | 1.2973 |

### Framework versions

- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
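## Example usage (sketch)

No usage snippet is provided. Given the Qwk/MSE/RMSE metrics, this looks like regression-style essay scoring, so a minimal sketch follows; the head shape (a scalar score) is an assumption, not something stated in the card:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "genki10/BERT_AugV8_k3_task1_organization_sp020_lw040_fold3"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("An example student essay ...", return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits  # Qwk/MSE metrics suggest a scalar score head
print(score)
```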
Devtrick/roberta_nli_ensemble
Devtrick
2025-04-03T21:12:45Z
30
0
transformers
[ "transformers", "safetensors", "roberta_nli_classifier", "generated_from_trainer", "arxiv:1907.11692", "endpoints_compatible", "region:us" ]
null
2025-04-02T01:33:46Z
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta_nli_ensemble
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta_nli_ensemble

<!-- Provide a quick summary of what the model is/does. -->

A fine-tuned RoBERTa model designed for a Natural Language Inference (NLI) task, classifying the relationship between pairs of sentences given a premise and a hypothesis.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This model builds upon the roberta-base architecture, adding a multi-layer classification head for NLI. It computes average pooled representations of premise and hypothesis tokens (identified via `token_type_ids`) and concatenates them before passing through additional linear and non-linear layers. The final output is used to classify the pair of sentences into one of three classes.

- **Developed by:** Dev Soneji and Patrick Mermelstein Lyons
- **Language(s):** English
- **Model type:** Supervised
- **Model architecture:** RoBERTa encoder with a multi-layer classification head
- **Finetuned from model:** roberta-base

### Model Resources

<!-- Provide links where applicable. -->

- **Repository:** [Devtrick/roberta_nli_ensemble](https://huggingface.co/Devtrick/roberta_nli_ensemble)
- **Paper or documentation:** [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692)

## Training Details

### Training Data

<!-- This is a short stub of information on the training data that was used, and documentation related to data pre-processing or additional filtering (if applicable). -->

The model was trained on a dataset located in `train.csv`. This dataset comprises 24K premise-hypothesis pairs, each labelled to indicate whether the hypothesis is true given the premise. The label was binary: 0 = hypothesis is false, 1 = hypothesis is true. No further details were given on the origin and validity of this dataset.

The data was passed through a tokenizer ([AutoTokenizer](https://huggingface.co/docs/transformers/v4.50.0/en/model_doc/auto#transformers.AutoTokenizer)), as part of the standard Hugging Face library. No other pre-processing was done, aside from relabelling columns to match the expected format.

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

The model was trained in the following way:
- The model was trained on the data described under [Training Data](#training-data), with renaming of columns and tokenization.
- The model was initialised with a custom configuration class, `roBERTaConfig`, setting essential parameters. The model itself, `roBERTaClassifier`, extends the pretrained RoBERTa model to include multiple linear layers for classification and pooling, as sketched below.
- Hyperparameter selection was carried out in a separate grid search to identify the best-performing hyperparameters, listed under [Training Hyperparameters](#training-hyperparameters).
- The model was validated on the [test data](#testing-data) described below, giving the reported [results](#results).
- Checkpoints were saved after each epoch, and finally the best checkpoint was reloaded and pushed to the Hugging Face Hub.
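As a rough illustration of the head described above (function and layer names, sizes, and pooling details are assumptions for exposition, not the repository's actual code):

```python
import torch
import torch.nn as nn

def nli_logits(hidden_states, token_type_ids, head: nn.Module):
    """Average-pool premise and hypothesis tokens separately, then classify.

    hidden_states: (batch, seq_len, hidden) encoder output
    token_type_ids: (batch, seq_len), 0 marking premise tokens, 1 hypothesis tokens
    head: e.g. nn.Sequential(nn.Linear(2 * hidden, hidden), nn.Tanh(), nn.Linear(hidden, 3))
    """
    premise_mask = (token_type_ids == 0).unsqueeze(-1).float()
    hypothesis_mask = (token_type_ids == 1).unsqueeze(-1).float()
    # Masked mean pooling over the sequence dimension for each segment
    premise_vec = (hidden_states * premise_mask).sum(1) / premise_mask.sum(1).clamp(min=1e-9)
    hypothesis_vec = (hidden_states * hypothesis_mask).sum(1) / hypothesis_mask.sum(1).clamp(min=1e-9)
    # Concatenate the two pooled vectors and classify
    return head(torch.cat([premise_vec, hypothesis_vec], dim=-1))  # (batch, num_classes)
```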
#### Training Hyperparameters

<!-- This is a summary of the values of hyperparameters used in training the model. -->

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- weight_decay: 0.01
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10

#### Speeds, Sizes, Times

<!-- This section provides information about how roughly how long it takes to train the model and the size of the resulting model. -->

- Training time: 12 minutes 17 seconds on the hardware specified below. Training was configured for 10 epochs, but early stopping ended it after 5.
- Model size: 126M parameters.

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data & Metrics

#### Testing Data

<!-- This should describe any evaluation data used (e.g., the development/validation set provided). -->

The development (and effectively testing) dataset is located in `dev.csv`. It contains 6K pairs of validation data in the same format as the training data. No further details were given on the origin and validity of this dataset.

The data was passed through a tokenizer ([AutoTokenizer](https://huggingface.co/docs/transformers/v4.50.0/en/model_doc/auto#transformers.AutoTokenizer)), as part of the standard Hugging Face library. No other pre-processing was done, aside from relabelling columns to match the expected format.

#### Metrics

<!-- These are the evaluation metrics being used. -->

- Accuracy: Proportion of correct predictions.
- Matthews Correlation Coefficient (MCC): Correlation coefficient between predicted and true labels, ranging from -1 to 1.

### Results

Final results on the evaluation set:
- Loss: 0.4849
- Accuracy: 0.8848
- Mcc: 0.7695

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Mcc    |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6552        | 1.0   | 191  | 0.3383          | 0.8685   | 0.7377 |
| 0.2894        | 2.0   | 382  | 0.3045          | 0.8778   | 0.7559 |
| 0.1891        | 3.0   | 573  | 0.3255          | 0.8854   | 0.7705 |
| 0.1209        | 4.0   | 764  | 0.3963          | 0.8829   | 0.7657 |
| 0.0843        | 5.0   | 955  | 0.4849          | 0.8848   | 0.7695 |

## Technical Specifications

### Hardware

PC specs the model was trained on:
- CPU: AMD Ryzen 7 7700X
- GPU: NVIDIA GeForce RTX 5070 Ti
- Memory: 32GB DDR5
- Motherboard: MSI MAG B650 TOMAHAWK WIFI

### Software

- Transformers 4.50.2
- Pytorch 2.8.0.dev20250326+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

- The model's performance and biases depend on the data on which it was trained; since nothing is known about the data's origin, this cannot be assessed.
- The risk lies in trusting any labelling with confidence and without manual verification. Models can make mistakes; verify the outputs.
- The model is limited by training data that cannot cover every premise-hypothesis combination encountered in real use. Additional training and validation data would have been useful.

## Additional Information

<!-- Any other information that would be useful for other people to know. -->

- This model was pushed to the Hugging Face Hub with `trainer.push_to_hub()` after training locally.
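For reference, both metrics reported above are one-liners in scikit-learn; the labels below are toy values, purely illustrative:

```python
from sklearn.metrics import accuracy_score, matthews_corrcoef

y_true = [0, 1, 1, 0, 1]  # toy gold labels
y_pred = [0, 1, 0, 0, 1]  # toy predictions
print(accuracy_score(y_true, y_pred))     # 0.8
print(matthews_corrcoef(y_true, y_pred))  # ~0.67, on the [-1, 1] scale
```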
tahamajs/llama-3.2-3b-orpo-lora64-4bit-instruct
tahamajs
2025-04-03T21:11:59Z
0
2
transformers
[ "transformers", "safetensors", "unsloth", "dpo", "orpo", "lora", "preference-optimization", "endpoints_compatible", "region:us" ]
null
2025-04-03T20:56:00Z
---
library_name: transformers
tags:
- unsloth
- dpo
- orpo
- lora
- preference-optimization
---

# Model Card for Llama-3.2-3B ORPO Fine-Tuned Model with LoRA

This model is a fine-tuned version of the base model **unsloth/Llama-3.2-3B-Instruct-bnb-4bit** using Odds Ratio Preference Optimization (ORPO) with LoRA-based adaptation. The training leverages a dataset of pairwise (chosen vs. rejected) responses to align the model with human preferences without the need for a separate reward or reference model.

## Model Details

### Model Description

This is a fine-tuned language model that has been optimized using ORPO, a direct preference optimization method that eliminates the need for a reference model. The base model, **unsloth/Llama-3.2-3B-Instruct-bnb-4bit**, is adapted using Low-Rank Adaptation (LoRA) with a rank and alpha of 64, allowing for efficient fine-tuning with only a small fraction of the model's parameters updated. The fine-tuning is performed on a dataset consisting of approximately 1,600 examples (sampled from "mlabonne/orpo-dpo-mix-40k"), where the model learns to favor the "chosen" response over the "rejected" one directly through odds ratio optimization.

- **Developed by:** [Your Name or Organization]
- **Model Type:** Causal Language Model (Instruction-Finetuned)
- **Base Model:** unsloth/Llama-3.2-3B-Instruct-bnb-4bit
- **Training Method:** ORPO (Odds Ratio Preference Optimization) with LoRA
- **Quantization:** 4-bit
- **Language:** English (primarily)
- **License:** [Specify License, e.g., Apache-2.0]

### Model Sources

- **Repository:** [Link to the repository on Hugging Face]
- **Paper:** [Reference any paper if available, or "N/A"]
- **Demo:** [Link to a demo if available]

## Uses

### Direct Use

This model is intended for tasks that benefit from preference-aligned generation, such as:
- Instruction following
- Chatbot response generation
- Content creation where human-aligned quality is crucial

### Downstream Use

This model can be further fine-tuned or adapted for domain-specific applications where human preferences play a significant role in output quality.

### Out-of-Scope Use

- Applications requiring rigorous factual correctness (e.g., medical or legal advice) without further domain-specific fine-tuning.
- Use cases involving sensitive content where model biases could lead to harmful outcomes.

## Bias, Risks, and Limitations

- **Bias:** The model may still exhibit biases inherited from the base model and the fine-tuning data.
- **Risks:** Users should be cautious in applications where incorrect or biased information could have serious consequences.
- **Limitations:** As a fine-tuned model using preference optimization, its performance is tied to the quality and diversity of the training data. It may not generalize well to contexts significantly different from its training set.

### Recommendations

Users should:
- Evaluate the model on their specific use case.
- Monitor outputs for potential bias or factual inaccuracies.
- Fine-tune further if necessary to better align with specific requirements.

## How to Get Started with the Model

Below is an example code snippet to load and use the model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("your-username/llama-3.2-3b-orpo-lora64")
tokenizer = AutoTokenizer.from_pretrained("your-username/llama-3.2-3b-orpo-lora64")
model.to("cuda")  # the inputs below live on GPU, so the model must too

input_text = "Please explain the benefits of using ORPO for fine-tuning language models."
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
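For intuition on the odds-ratio objective the card describes, here is an illustrative sketch of the penalty at its core; it assumes per-sequence mean token log-probabilities and is a simplification, not TRL's exact implementation:

```python
import torch
import torch.nn.functional as F

def orpo_odds_ratio_penalty(logp_chosen, logp_rejected, beta=0.1):
    """Penalty that pushes the odds of the chosen response above the rejected one.

    logp_chosen / logp_rejected: mean token log-probabilities (negative tensors).
    """
    # log odds(p) = log p - log(1 - p), computed stably in log space
    log_odds = (logp_chosen - logp_rejected) - (
        torch.log1p(-torch.exp(logp_chosen)) - torch.log1p(-torch.exp(logp_rejected))
    )
    return -beta * F.logsigmoid(log_odds)  # added to the usual NLL on the chosen response

# Toy usage: chosen slightly more likely than rejected -> small penalty
print(orpo_odds_ratio_penalty(torch.tensor(-0.5), torch.tensor(-1.0)))
```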
jmalejandrob79/cndnlhr16
jmalejandrob79
2025-04-03T21:11:47Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-04-03T20:21:27Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
#     prompt
#   output:
#     url: https://...
instance_prompt: cndnlhr16
---

# Cndnlhr16

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `cndnlhr16` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "cndnlhr16",
    "lora_weights": "https://huggingface.co/jmalejandrob79/cndnlhr16/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jmalejandrob79/cndnlhr16', weight_name='lora.safetensors')
image = pipeline('cndnlhr16').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Training details

- Steps: 4000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/jmalejandrob79/cndnlhr16/discussions) to add images that show off what you've made with this LoRA.
Etienne248/dqn-SpaceInvadersNoFrameskip-v4
Etienne248
2025-04-03T21:11:05Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-04-03T21:10:47Z
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 630.00 +/- 201.43
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
SBX (SB3 + Jax): https://github.com/araffin/sbx

Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```

```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Etienne248 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Etienne248 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)

```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Etienne248
```

## Hyperparameters

```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 1000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```

# Environment Arguments

```python
{'render_mode': 'rgb_array'}
```
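## Loading without the zoo (sketch)

If you prefer plain SB3 over the zoo CLI, a minimal loading sketch should also work; the checkpoint filename here is an assumption based on the zoo's usual `algo-env.zip` naming convention:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the checkpoint from the Hub (filename assumed, not confirmed by the card)
checkpoint = load_from_hub(
    repo_id="Etienne248/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)  # evaluation still needs a matching wrapped Atari env
```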
darwinha/distilbert-base-uncased-finetuned-imdb
darwinha
2025-04-03T21:07:09Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "fill-mask", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-04-03T16:34:42Z
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-imdb

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4900

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6903        | 1.0   | 157  | 2.4975          |
| 2.5694        | 2.0   | 314  | 2.4703          |
| 2.5289        | 3.0   | 471  | 2.4552          |

### Framework versions

- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
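## Example usage (sketch)

The card has no usage example; since the pipeline tag is fill-mask, a minimal sketch (the input sentence is illustrative) would be:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="darwinha/distilbert-base-uncased-finetuned-imdb")
for prediction in fill("This movie was an absolute [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```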
mradermacher/Ring-lite-distill-preview-GGUF
mradermacher
2025-04-03T21:06:33Z
20
0
transformers
[ "transformers", "gguf", "zh", "en", "base_model:inclusionAI/Ring-lite-distill-preview", "base_model:quantized:inclusionAI/Ring-lite-distill-preview", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-02T16:51:21Z
---
base_model: inclusionAI/Ring-lite-distill-preview
language:
- zh
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->

static quants of https://huggingface.co/inclusionAI/Ring-lite-distill-preview

<!-- provided-files -->

weighted/imatrix quants are available at https://huggingface.co/mradermacher/Ring-lite-distill-preview-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Ring-lite-distill-preview-GGUF/resolve/main/Ring-lite-distill-preview.Q2_K.gguf) | Q2_K | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Ring-lite-distill-preview-GGUF/resolve/main/Ring-lite-distill-preview.Q3_K_S.gguf) | Q3_K_S | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/Ring-lite-distill-preview-GGUF/resolve/main/Ring-lite-distill-preview.Q3_K_M.gguf) | Q3_K_M | 8.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Ring-lite-distill-preview-GGUF/resolve/main/Ring-lite-distill-preview.Q3_K_L.gguf) | Q3_K_L | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/Ring-lite-distill-preview-GGUF/resolve/main/Ring-lite-distill-preview.IQ4_XS.gguf) | IQ4_XS | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/Ring-lite-distill-preview-GGUF/resolve/main/Ring-lite-distill-preview.Q4_K_S.gguf) | Q4_K_S | 10.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Ring-lite-distill-preview-GGUF/resolve/main/Ring-lite-distill-preview.Q4_K_M.gguf) | Q4_K_M | 11.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Ring-lite-distill-preview-GGUF/resolve/main/Ring-lite-distill-preview.Q5_K_S.gguf) | Q5_K_S | 12.0 | |
| [GGUF](https://huggingface.co/mradermacher/Ring-lite-distill-preview-GGUF/resolve/main/Ring-lite-distill-preview.Q5_K_M.gguf) | Q5_K_M | 12.8 | |
| [GGUF](https://huggingface.co/mradermacher/Ring-lite-distill-preview-GGUF/resolve/main/Ring-lite-distill-preview.Q6_K.gguf) | Q6_K | 15.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Ring-lite-distill-preview-GGUF/resolve/main/Ring-lite-distill-preview.Q8_0.gguf) | Q8_0 | 18.0 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
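## Quick download example

To fetch a single quant programmatically, the standard `huggingface_hub` helper works; the filename below is one of the "recommended" entries from the table above, and any llama.cpp-based runtime that understands GGUF should then be able to load it:

```python
from huggingface_hub import hf_hub_download

# Download the Q4_K_M quant listed in the table above
path = hf_hub_download(
    repo_id="mradermacher/Ring-lite-distill-preview-GGUF",
    filename="Ring-lite-distill-preview.Q4_K_M.gguf",
)
print(path)  # local path to pass to your GGUF runtime
```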
efficient-speech/lite-whisper-small-fast
efficient-speech
2025-04-03T21:05:04Z
0
0
transformers
[ "transformers", "safetensors", "lite-whisper", "feature-extraction", "audio", "automatic-speech-recognition", "whisper", "hf-asr-leaderboard", "custom_code", "arxiv:2502.20583", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "region:us" ]
automatic-speech-recognition
2025-04-03T20:52:57Z
---
base_model: openai/whisper-small
library_name: transformers
license: apache-2.0
pipeline_tag: automatic-speech-recognition
tags:
- audio
- automatic-speech-recognition
- whisper
- hf-asr-leaderboard
---

<!-- Provide a quick summary of what the model is/does. -->

Lite-Whisper is a compressed version of OpenAI Whisper with LiteASR. See our [GitHub repository](https://github.com/efeslab/LiteASR) and [paper](https://arxiv.org/abs/2502.20583) for details.

## Benchmark Results

The following is the average word error rate (WER) evaluated on the [ESB datasets](https://huggingface.co/datasets/hf-audio/esb-datasets-test-only-sorted):

| Model | Average WER (↓) | Encoder Size | Decoder Size |
|-------|----------------|--------------|--------------|
| [whisper-tiny](https://huggingface.co/openai/whisper-tiny) | 22.01 | 7.63M | 29.55M |
| [lite-whisper-tiny-acc](https://huggingface.co/efficient-speech/lite-whisper-tiny-acc) | 22.97 | 7.41M | 29.55M |
| [lite-whisper-tiny](https://huggingface.co/efficient-speech/lite-whisper-tiny) | 23.95 | 7.00M | 29.55M |
| [lite-whisper-tiny-fast](https://huggingface.co/efficient-speech/lite-whisper-tiny-fast) | 27.09 | 6.48M | 29.55M |
| &nbsp; | &nbsp; | &nbsp; | &nbsp; |
| [whisper-base](https://huggingface.co/openai/whisper-base) | 17.67 | 19.82M | 52.00M |
| [lite-whisper-base-acc](https://huggingface.co/efficient-speech/lite-whisper-base-acc) | 19.07 | 18.64M | 52.00M |
| [lite-whisper-base](https://huggingface.co/efficient-speech/lite-whisper-base) | 19.71 | 17.44M | 52.00M |
| [lite-whisper-base-fast](https://huggingface.co/efficient-speech/lite-whisper-base-fast) | 23.05 | 16.07M | 52.00M |
| &nbsp; | &nbsp; | &nbsp; | &nbsp; |
| [whisper-small](https://huggingface.co/openai/whisper-small) | 15.89 | 87.00M | 153.58M |
| [lite-whisper-small-acc](https://huggingface.co/efficient-speech/lite-whisper-small-acc) | 15.37 | 76.99M | 153.58M |
| [lite-whisper-small](https://huggingface.co/efficient-speech/lite-whisper-small) | 14.96 | 70.16M | 153.58M |
| [lite-whisper-small-fast](https://huggingface.co/efficient-speech/lite-whisper-small-fast) | 14.92 | 63.11M | 153.58M |
| &nbsp; | &nbsp; | &nbsp; | &nbsp; |
| [whisper-medium](https://huggingface.co/openai/whisper-medium) | 15.12 | 305.68M | 456.64M |
| [lite-whisper-medium-acc](https://huggingface.co/efficient-speech/lite-whisper-medium-acc) | 13.46 | 269.93M | 456.64M |
| [lite-whisper-medium](https://huggingface.co/efficient-speech/lite-whisper-medium) | 14.50 | 239.99M | 456.64M |
| [lite-whisper-medium-fast](https://huggingface.co/efficient-speech/lite-whisper-medium-fast) | 14.52 | 215.31M | 456.64M |

## Citation

If you use LiteASR in your research, please cite the following paper:

```
@misc{kamahori2025liteasrefficientautomaticspeech,
      title={LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation},
      author={Keisuke Kamahori and Jungo Kasai and Noriyuki Kojima and Baris Kasikci},
      year={2025},
      eprint={2502.20583},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2502.20583},
}
```
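## Example usage (sketch)

The card ships no inference snippet. A minimal sketch, assuming the compressed encoder follows the standard Whisper `generate` API and that the processor comes from the base model; the custom architecture is loaded via remote code, hence `trust_remote_code=True`:

```python
import librosa
import torch
from transformers import AutoModel, AutoProcessor

# Processor assumed to come from the base model named in the front matter
processor = AutoProcessor.from_pretrained("openai/whisper-small")
model = AutoModel.from_pretrained(
    "efficient-speech/lite-whisper-small-fast", trust_remote_code=True
)

audio, _ = librosa.load("sample.wav", sr=16000)
features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
with torch.no_grad():
    predicted_ids = model.generate(features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```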
efficient-speech/lite-whisper-small
efficient-speech
2025-04-03T21:04:53Z
0
0
transformers
[ "transformers", "safetensors", "lite-whisper", "feature-extraction", "audio", "automatic-speech-recognition", "whisper", "hf-asr-leaderboard", "custom_code", "arxiv:2502.20583", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "region:us" ]
automatic-speech-recognition
2025-04-03T20:52:04Z
--- base_model: openai/whisper-small library_name: transformers license: apache-2.0 pipeline_tag: automatic-speech-recognition tags: - audio - automatic-speech-recognition - whisper - hf-asr-leaderboard --- <!-- Provide a quick summary of what the model is/does. --> Lite-Whisper is a compressed version of OpenAI Whisper with LiteASR. See our [GitHub repository](https://github.com/efeslab/LiteASR) and [paper](https://arxiv.org/abs/2502.20583) for details. ## Benchmark Results Following is the average word error rate (WER) evaluated on the [ESB datasets](https://huggingface.co/datasets/hf-audio/esb-datasets-test-only-sorted): | Model | Average WER (↓) | Encoder Size | Decoder Size | |-------|----------------|--------------|--------------| | [whisper-tiny](https://huggingface.co/openai/whisper-tiny) | 22.01 | 7.63M | 29.55M | | [lite-whisper-tiny-acc](https://huggingface.co/efficient-speech/lite-whisper-tiny-acc) | 22.97 | 7.41M | 29.55M | | [lite-whisper-tiny](https://huggingface.co/efficient-speech/lite-whisper-tiny) | 23.95 | 7.00M | 29.55M | | [lite-whisper-tiny-fast](https://huggingface.co/efficient-speech/lite-whisper-tiny-fast) | 27.09 | 6.48M | 29.55M | | &nbsp; | &nbsp; | &nbsp; | &nbsp; | | [whisper-base](https://huggingface.co/openai/whisper-base) | 17.67 | 19.82M | 52.00M | | [lite-whisper-base-acc](https://huggingface.co/efficient-speech/lite-whisper-base-acc) | 19.07 | 18.64M | 52.00M | | [lite-whisper-base](https://huggingface.co/efficient-speech/lite-whisper-base) | 19.71 | 17.44M | 52.00M | | [lite-whisper-base-fast](https://huggingface.co/efficient-speech/lite-whisper-base-fast) | 23.05 | 16.07M | 52.00M | | &nbsp; | &nbsp; | &nbsp; | &nbsp; | | [whisper-small](https://huggingface.co/openai/whisper-small) | 15.89 | 87.00M | 153.58M | | [lite-whisper-small-acc](https://huggingface.co/efficient-speech/lite-whisper-small-acc) | 15.37 | 76.99M | 153.58M | | [lite-whisper-small](https://huggingface.co/efficient-speech/lite-whisper-small) | 14.96 | 70.16M | 153.58M | | [lite-whisper-small-fast](https://huggingface.co/efficient-speech/lite-whisper-small-fast) | 14.92 | 63.11M | 153.58M | | &nbsp; | &nbsp; | &nbsp; | &nbsp; | | [whisper-medium](https://huggingface.co/openai/whisper-medium) | 15.12 | 305.68M | 456.64M | | [lite-whisper-medium-acc](https://huggingface.co/efficient-speech/lite-whisper-medium-acc) | 13.46 | 269.93M | 456.64M | | [lite-whisper-medium](https://huggingface.co/efficient-speech/lite-whisper-medium) | 14.50 | 239.99M | 456.64M | | [lite-whisper-medium-fast](https://huggingface.co/efficient-speech/lite-whisper-medium-fast) | 14.52 | 215.31M | 456.64M | ## Citation If you use LiteASR in your research, please cite the following paper: ``` @misc{kamahori2025liteasrefficientautomaticspeech, title={LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation}, author={Keisuke Kamahori and Jungo Kasai and Noriyuki Kojima and Baris Kasikci}, year={2025}, eprint={2502.20583}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2502.20583}, } ```
efficient-speech/lite-whisper-small-acc
efficient-speech
2025-04-03T21:04:36Z
0
0
transformers
[ "transformers", "safetensors", "lite-whisper", "feature-extraction", "audio", "automatic-speech-recognition", "whisper", "hf-asr-leaderboard", "custom_code", "arxiv:2502.20583", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "region:us" ]
automatic-speech-recognition
2025-04-03T20:51:09Z
--- base_model: openai/whisper-small library_name: transformers license: apache-2.0 pipeline_tag: automatic-speech-recognition tags: - audio - automatic-speech-recognition - whisper - hf-asr-leaderboard --- <!-- Provide a quick summary of what the model is/does. --> Lite-Whisper is a compressed version of OpenAI Whisper with LiteASR. See our [GitHub repository](https://github.com/efeslab/LiteASR) and [paper](https://arxiv.org/abs/2502.20583) for details. ## Benchmark Results Following is the average word error rate (WER) evaluated on the [ESB datasets](https://huggingface.co/datasets/hf-audio/esb-datasets-test-only-sorted): | Model | Average WER (↓) | Encoder Size | Decoder Size | |-------|----------------|--------------|--------------| | [whisper-tiny](https://huggingface.co/openai/whisper-tiny) | 22.01 | 7.63M | 29.55M | | [lite-whisper-tiny-acc](https://huggingface.co/efficient-speech/lite-whisper-tiny-acc) | 22.97 | 7.41M | 29.55M | | [lite-whisper-tiny](https://huggingface.co/efficient-speech/lite-whisper-tiny) | 23.95 | 7.00M | 29.55M | | [lite-whisper-tiny-fast](https://huggingface.co/efficient-speech/lite-whisper-tiny-fast) | 27.09 | 6.48M | 29.55M | | &nbsp; | &nbsp; | &nbsp; | &nbsp; | | [whisper-base](https://huggingface.co/openai/whisper-base) | 17.67 | 19.82M | 52.00M | | [lite-whisper-base-acc](https://huggingface.co/efficient-speech/lite-whisper-base-acc) | 19.07 | 18.64M | 52.00M | | [lite-whisper-base](https://huggingface.co/efficient-speech/lite-whisper-base) | 19.71 | 17.44M | 52.00M | | [lite-whisper-base-fast](https://huggingface.co/efficient-speech/lite-whisper-base-fast) | 23.05 | 16.07M | 52.00M | | &nbsp; | &nbsp; | &nbsp; | &nbsp; | | [whisper-small](https://huggingface.co/openai/whisper-small) | 15.89 | 87.00M | 153.58M | | [lite-whisper-small-acc](https://huggingface.co/efficient-speech/lite-whisper-small-acc) | 15.37 | 76.99M | 153.58M | | [lite-whisper-small](https://huggingface.co/efficient-speech/lite-whisper-small) | 14.96 | 70.16M | 153.58M | | [lite-whisper-small-fast](https://huggingface.co/efficient-speech/lite-whisper-small-fast) | 14.92 | 63.11M | 153.58M | | &nbsp; | &nbsp; | &nbsp; | &nbsp; | | [whisper-medium](https://huggingface.co/openai/whisper-medium) | 15.12 | 305.68M | 456.64M | | [lite-whisper-medium-acc](https://huggingface.co/efficient-speech/lite-whisper-medium-acc) | 13.46 | 269.93M | 456.64M | | [lite-whisper-medium](https://huggingface.co/efficient-speech/lite-whisper-medium) | 14.50 | 239.99M | 456.64M | | [lite-whisper-medium-fast](https://huggingface.co/efficient-speech/lite-whisper-medium-fast) | 14.52 | 215.31M | 456.64M | ## Citation If you use LiteASR in your research, please cite the following paper: ``` @misc{kamahori2025liteasrefficientautomaticspeech, title={LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation}, author={Keisuke Kamahori and Jungo Kasai and Noriyuki Kojima and Baris Kasikci}, year={2025}, eprint={2502.20583}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2502.20583}, } ```
genki10/BERT_AugV8_k3_task1_organization_sp020_lw040_fold2
genki10
2025-04-03T21:03:39Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-03-25T07:59:19Z
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_AugV8_k3_task1_organization_sp020_lw040_fold2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# BERT_AugV8_k3_task1_organization_sp020_lw040_fold2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7989
- Qwk: 0.2778
- Mse: 0.7991
- Rmse: 0.8939

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150

### Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 3 | 8.4335 | 0.0 | 8.4339 | 2.9041 |
| No log | 2.0 | 6 | 4.8952 | 0.0203 | 4.8957 | 2.2126 |
| No log | 3.0 | 9 | 3.1673 | 0.0 | 3.1677 | 1.7798 |
| No log | 4.0 | 12 | 1.9505 | 0.0700 | 1.9510 | 1.3968 |
| No log | 5.0 | 15 | 1.3534 | 0.0107 | 1.3539 | 1.1636 |
| No log | 6.0 | 18 | 0.9310 | 0.0 | 0.9315 | 0.9651 |
| No log | 7.0 | 21 | 1.0587 | 0.0067 | 1.0591 | 1.0291 |
| No log | 8.0 | 24 | 0.8247 | 0.2499 | 0.8250 | 0.9083 |
| No log | 9.0 | 27 | 0.9349 | 0.1281 | 0.9352 | 0.9671 |
| No log | 10.0 | 30 | 0.7192 | 0.4041 | 0.7196 | 0.8483 |
| No log | 11.0 | 33 | 0.7330 | 0.3158 | 0.7335 | 0.8564 |
| No log | 12.0 | 36 | 0.7938 | 0.3043 | 0.7939 | 0.8910 |
| No log | 13.0 | 39 | 0.5902 | 0.5299 | 0.5903 | 0.7683 |
| No log | 14.0 | 42 | 1.3043 | 0.2418 | 1.3044 | 1.1421 |
| No log | 15.0 | 45 | 0.5436 | 0.4035 | 0.5434 | 0.7372 |
| No log | 16.0 | 48 | 0.6578 | 0.3225 | 0.6576 | 0.8109 |
| No log | 17.0 | 51 | 0.5686 | 0.4605 | 0.5688 | 0.7542 |
| No log | 18.0 | 54 | 0.8095 | 0.4449 | 0.8097 | 0.8998 |
| No log | 19.0 | 57 | 0.5088 | 0.5028 | 0.5087 | 0.7132 |
| No log | 20.0 | 60 | 0.5904 | 0.4177 | 0.5902 | 0.7682 |
| No log | 21.0 | 63 | 0.6185 | 0.4196 | 0.6186 | 0.7865 |
| No log | 22.0 | 66 | 0.5203 | 0.4824 | 0.5203 | 0.7213 |
| No log | 23.0 | 69 | 0.5511 | 0.4847 | 0.5512 | 0.7424 |
| No log | 24.0 | 72 | 0.6307 | 0.4383 | 0.6311 | 0.7944 |
| No log | 25.0 | 75 | 0.5619 | 0.5237 | 0.5621 | 0.7497 |
| No log | 26.0 | 78 | 0.6441 | 0.4665 | 0.6443 | 0.8027 |
| No log | 27.0 | 81 | 0.5903 | 0.4874 | 0.5904 | 0.7684 |
| No log | 28.0 | 84 | 0.7989 | 0.2778 | 0.7991 | 0.8939 |

### Framework versions

- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
TabAnd58/bert-baseline
TabAnd58
2025-04-03T21:03:32Z
0
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:BAAI/bge-small-en-v1.5", "base_model:finetune:BAAI/bge-small-en-v1.5", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-04-03T20:41:54Z
---
library_name: transformers
license: mit
base_model: BAAI/bge-small-en-v1.5
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-baseline
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-baseline

This model is a fine-tuned version of [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1151
- Precision: 0.9254
- Recall: 0.9330
- F1: 0.9292
- Accuracy: 0.9837

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5.373713206635396e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.116         | 1.0   | 2500  | 0.1015          | 0.8397    | 0.9078 | 0.8724 | 0.9723   |
| 0.0669        | 2.0   | 5000  | 0.0861          | 0.8909    | 0.9157 | 0.9031 | 0.9801   |
| 0.0499        | 3.0   | 7500  | 0.0877          | 0.8971    | 0.9263 | 0.9115 | 0.9814   |
| 0.0261        | 4.0   | 10000 | 0.0985          | 0.9127    | 0.9260 | 0.9193 | 0.9816   |
| 0.0183        | 5.0   | 12500 | 0.1042          | 0.9077    | 0.9248 | 0.9161 | 0.9815   |
| 0.0139        | 6.0   | 15000 | 0.1083          | 0.9085    | 0.9290 | 0.9186 | 0.9825   |
| 0.0121        | 7.0   | 17500 | 0.1107          | 0.9093    | 0.9310 | 0.9200 | 0.9823   |
| 0.005         | 8.0   | 20000 | 0.1147          | 0.9181    | 0.9322 | 0.9251 | 0.9829   |
| 0.0033        | 9.0   | 22500 | 0.1108          | 0.9228    | 0.9360 | 0.9294 | 0.9841   |
| 0.0016        | 10.0  | 25000 | 0.1151          | 0.9254    | 0.9330 | 0.9292 | 0.9837   |

### Framework versions

- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
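## Example usage (sketch)

The card has no usage snippet; since the pipeline tag is token-classification, a minimal sketch would be the following. The dataset and label set are unspecified in the card, so the example sentence and output labels are illustrative only:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="TabAnd58/bert-baseline",
    aggregation_strategy="simple",  # merge word-piece tokens into whole entities
)
print(ner("Barack Obama visited Paris in 2015."))
```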
efficient-speech/lite-whisper-tiny
efficient-speech
2025-04-03T21:02:50Z
0
0
transformers
[ "transformers", "safetensors", "lite-whisper", "feature-extraction", "audio", "automatic-speech-recognition", "whisper", "hf-asr-leaderboard", "custom_code", "arxiv:2502.20583", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "region:us" ]
automatic-speech-recognition
2025-04-03T20:49:27Z
--- base_model: openai/whisper-tiny library_name: transformers license: apache-2.0 pipeline_tag: automatic-speech-recognition tags: - audio - automatic-speech-recognition - whisper - hf-asr-leaderboard --- <!-- Provide a quick summary of what the model is/does. --> Lite-Whisper is a compressed version of OpenAI Whisper with LiteASR. See our [GitHub repository](https://github.com/efeslab/LiteASR) and [paper](https://arxiv.org/abs/2502.20583) for details. ## Benchmark Results Following is the average word error rate (WER) evaluated on the [ESB datasets](https://huggingface.co/datasets/hf-audio/esb-datasets-test-only-sorted): | Model | Average WER (↓) | Encoder Size | Decoder Size | |-------|----------------|--------------|--------------| | [whisper-tiny](https://huggingface.co/openai/whisper-tiny) | 22.01 | 7.63M | 29.55M | | [lite-whisper-tiny-acc](https://huggingface.co/efficient-speech/lite-whisper-tiny-acc) | 22.97 | 7.41M | 29.55M | | [lite-whisper-tiny](https://huggingface.co/efficient-speech/lite-whisper-tiny) | 23.95 | 7.00M | 29.55M | | [lite-whisper-tiny-fast](https://huggingface.co/efficient-speech/lite-whisper-tiny-fast) | 27.09 | 6.48M | 29.55M | | &nbsp; | &nbsp; | &nbsp; | &nbsp; | | [whisper-base](https://huggingface.co/openai/whisper-base) | 17.67 | 19.82M | 52.00M | | [lite-whisper-base-acc](https://huggingface.co/efficient-speech/lite-whisper-base-acc) | 19.07 | 18.64M | 52.00M | | [lite-whisper-base](https://huggingface.co/efficient-speech/lite-whisper-base) | 19.71 | 17.44M | 52.00M | | [lite-whisper-base-fast](https://huggingface.co/efficient-speech/lite-whisper-base-fast) | 23.05 | 16.07M | 52.00M | | &nbsp; | &nbsp; | &nbsp; | &nbsp; | | [whisper-small](https://huggingface.co/openai/whisper-small) | 15.89 | 87.00M | 153.58M | | [lite-whisper-small-acc](https://huggingface.co/efficient-speech/lite-whisper-small-acc) | 15.37 | 76.99M | 153.58M | | [lite-whisper-small](https://huggingface.co/efficient-speech/lite-whisper-small) | 14.96 | 70.16M | 153.58M | | [lite-whisper-small-fast](https://huggingface.co/efficient-speech/lite-whisper-small-fast) | 14.92 | 63.11M | 153.58M | | &nbsp; | &nbsp; | &nbsp; | &nbsp; | | [whisper-medium](https://huggingface.co/openai/whisper-medium) | 15.12 | 305.68M | 456.64M | | [lite-whisper-medium-acc](https://huggingface.co/efficient-speech/lite-whisper-medium-acc) | 13.46 | 269.93M | 456.64M | | [lite-whisper-medium](https://huggingface.co/efficient-speech/lite-whisper-medium) | 14.50 | 239.99M | 456.64M | | [lite-whisper-medium-fast](https://huggingface.co/efficient-speech/lite-whisper-medium-fast) | 14.52 | 215.31M | 456.64M | ## Citation If you use LiteASR in your research, please cite the following paper: ``` @misc{kamahori2025liteasrefficientautomaticspeech, title={LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation}, author={Keisuke Kamahori and Jungo Kasai and Noriyuki Kojima and Baris Kasikci}, year={2025}, eprint={2502.20583}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2502.20583}, } ```
efficient-speech/lite-whisper-tiny-acc
efficient-speech
2025-04-03T21:02:31Z
0
0
transformers
[ "transformers", "safetensors", "lite-whisper", "feature-extraction", "audio", "automatic-speech-recognition", "whisper", "hf-asr-leaderboard", "custom_code", "arxiv:2502.20583", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "region:us" ]
automatic-speech-recognition
2025-04-03T18:06:37Z
--- base_model: openai/whisper-tiny library_name: transformers license: apache-2.0 pipeline_tag: automatic-speech-recognition tags: - audio - automatic-speech-recognition - whisper - hf-asr-leaderboard --- <!-- Provide a quick summary of what the model is/does. --> Lite-Whisper is a compressed version of OpenAI Whisper with LiteASR. See our [GitHub repository](https://github.com/efeslab/LiteASR) and [paper](https://arxiv.org/abs/2502.20583) for details. ## Benchmark Results Following is the average word error rate (WER) evaluated on the [ESB datasets](https://huggingface.co/datasets/hf-audio/esb-datasets-test-only-sorted): | Model | Average WER (↓) | Encoder Size | Decoder Size | |-------|----------------|--------------|--------------| | [whisper-tiny](https://huggingface.co/openai/whisper-tiny) | 22.01 | 7.63M | 29.55M | | [lite-whisper-tiny-acc](https://huggingface.co/efficient-speech/lite-whisper-tiny-acc) | 22.97 | 7.41M | 29.55M | | [lite-whisper-tiny](https://huggingface.co/efficient-speech/lite-whisper-tiny) | 23.95 | 7.00M | 29.55M | | [lite-whisper-tiny-fast](https://huggingface.co/efficient-speech/lite-whisper-tiny-fast) | 27.09 | 6.48M | 29.55M | | &nbsp; | &nbsp; | &nbsp; | &nbsp; | | [whisper-base](https://huggingface.co/openai/whisper-base) | 17.67 | 19.82M | 52.00M | | [lite-whisper-base-acc](https://huggingface.co/efficient-speech/lite-whisper-base-acc) | 19.07 | 18.64M | 52.00M | | [lite-whisper-base](https://huggingface.co/efficient-speech/lite-whisper-base) | 19.71 | 17.44M | 52.00M | | [lite-whisper-base-fast](https://huggingface.co/efficient-speech/lite-whisper-base-fast) | 23.05 | 16.07M | 52.00M | | &nbsp; | &nbsp; | &nbsp; | &nbsp; | | [whisper-small](https://huggingface.co/openai/whisper-small) | 15.89 | 87.00M | 153.58M | | [lite-whisper-small-acc](https://huggingface.co/efficient-speech/lite-whisper-small-acc) | 15.37 | 76.99M | 153.58M | | [lite-whisper-small](https://huggingface.co/efficient-speech/lite-whisper-small) | 14.96 | 70.16M | 153.58M | | [lite-whisper-small-fast](https://huggingface.co/efficient-speech/lite-whisper-small-fast) | 14.92 | 63.11M | 153.58M | | &nbsp; | &nbsp; | &nbsp; | &nbsp; | | [whisper-medium](https://huggingface.co/openai/whisper-medium) | 15.12 | 305.68M | 456.64M | | [lite-whisper-medium-acc](https://huggingface.co/efficient-speech/lite-whisper-medium-acc) | 13.46 | 269.93M | 456.64M | | [lite-whisper-medium](https://huggingface.co/efficient-speech/lite-whisper-medium) | 14.50 | 239.99M | 456.64M | | [lite-whisper-medium-fast](https://huggingface.co/efficient-speech/lite-whisper-medium-fast) | 14.52 | 215.31M | 456.64M | ## Citation If you use LiteASR in your research, please cite the following paper: ``` @misc{kamahori2025liteasrefficientautomaticspeech, title={LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation}, author={Keisuke Kamahori and Jungo Kasai and Noriyuki Kojima and Baris Kasikci}, year={2025}, eprint={2502.20583}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2502.20583}, } ```
Machlovi/Safe_Phi4
Machlovi
2025-04-03T20:58:42Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-02-05T19:35:25Z
---
base_model: unsloth/Phi-4-unsloth-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---

## 🚀 How to Get Started with the Model

This model is fine-tuned using **LoRA (PEFT)** on **Phi-4 (4-bit Unsloth)**. To use it, you need to:

1. Load the **base model**
2. Load the **LoRA adapter**
3. Run inference

### **📌 Install Required Libraries**

Before running the code, make sure you have the necessary dependencies installed:

```bash
pip install unsloth peft transformers torch
```

### **📝 Load and Run Inference**

```python
from unsloth import FastLanguageModel
from peft import PeftModel
import torch

# Load the base model
base_model_name = "unsloth/Phi-4-unsloth-bnb-4bit"
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=base_model_name,
    max_seq_length=4096,  # Must match fine-tuning
    load_in_4bit=True,
)

# Load the fine-tuned LoRA adapter
lora_model_name = "Machlovi/Phi_Fullshot"
model = PeftModel.from_pretrained(model, lora_model_name)

# Run inference
input_text = "Why do we need to go to see something?"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=4)

# Decode and print the response
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

### **💡 Notes**

- This model is **quantized in 4-bit** for efficiency.
- Ensure `max_seq_length` matches the training configuration.
- This model requires a **GPU (CUDA)** for inference.

# Uploaded model

- **Developed by:** Machlovi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Phi-4-unsloth-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
efficient-speech/lite-whisper-large-v3-turbo
efficient-speech
2025-04-03T20:58:18Z
1143
8
transformers
[ "transformers", "safetensors", "lite-whisper", "feature-extraction", "audio", "automatic-speech-recognition", "whisper", "hf-asr-leaderboard", "custom_code", "arxiv:2502.20583", "base_model:openai/whisper-large-v3-turbo", "base_model:finetune:openai/whisper-large-v3-turbo", "license:apache-2.0", "region:us" ]
automatic-speech-recognition
2025-02-26T04:25:41Z
---
base_model: openai/whisper-large-v3-turbo
library_name: transformers
license: apache-2.0
pipeline_tag: automatic-speech-recognition
tags:
- audio
- automatic-speech-recognition
- whisper
- hf-asr-leaderboard
---

# Model Card for Lite-Whisper large-v3-turbo

<!-- Provide a quick summary of what the model is/does. -->

Lite-Whisper is a compressed version of OpenAI Whisper, produced with LiteASR. See our [GitHub repository](https://github.com/efeslab/LiteASR) and [paper](https://arxiv.org/abs/2502.20583) for details.

## Benchmark Results

The following is the average word error rate (WER) evaluated on the [ESB datasets](https://huggingface.co/datasets/hf-audio/esb-datasets-test-only-sorted):

| Model | Average WER (↓) | Encoder Size | Decoder Size |
|-------|----------------|--------------|--------------|
| [whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) | 10.1 | 635M | 907M |
| [lite-whisper-large-v3-acc](https://huggingface.co/efficient-speech/lite-whisper-large-v3-acc) | 10.1 | 429M | 907M |
| [lite-whisper-large-v3](https://huggingface.co/efficient-speech/lite-whisper-large-v3) | 10.2 | 377M | 907M |
| [lite-whisper-large-v3-fast](https://huggingface.co/efficient-speech/lite-whisper-large-v3-fast) | 11.3 | 308M | 907M |
| &nbsp; | &nbsp; | &nbsp; | &nbsp; |
| [whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) | 10.1 | 635M | 172M |
| [lite-whisper-large-v3-turbo-acc](https://huggingface.co/efficient-speech/lite-whisper-large-v3-turbo-acc) | 10.2 | 421M | 172M |
| [lite-whisper-large-v3-turbo](https://huggingface.co/efficient-speech/lite-whisper-large-v3-turbo) | 12.6 | 374M | 172M |
| [lite-whisper-large-v3-turbo-fast](https://huggingface.co/efficient-speech/lite-whisper-large-v3-turbo-fast) | 20.1 | 313M | 172M |
| &nbsp; | &nbsp; | &nbsp; | &nbsp; |
| [whisper-medium](https://huggingface.co/openai/whisper-medium) | 14.8 | 306M | 457M |

## Citation

If you use LiteASR in your research, please cite the following paper:

```
@misc{kamahori2025liteasrefficientautomaticspeech,
      title={LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation},
      author={Keisuke Kamahori and Jungo Kasai and Noriyuki Kojima and Baris Kasikci},
      year={2025},
      eprint={2502.20583},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2502.20583},
}
```
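## Usage

A minimal transcription sketch, assuming the checkpoint is loaded with `trust_remote_code=True` (this repo ships custom code) and the custom class follows the standard Whisper `generate` API; `audio.wav` is a placeholder path:

```python
import torch
import librosa
from transformers import AutoModel, AutoProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"

# Processor from the base Whisper checkpoint; compressed weights via custom code.
processor = AutoProcessor.from_pretrained("openai/whisper-large-v3-turbo")
model = AutoModel.from_pretrained(
    "efficient-speech/lite-whisper-large-v3-turbo", trust_remote_code=True
).to(device)

# Whisper expects 16 kHz mono input.
audio, _ = librosa.load("audio.wav", sr=16000)
features = processor(
    audio, sampling_rate=16000, return_tensors="pt"
).input_features.to(device)

predicted_ids = model.generate(features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```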
Cshavi/de-alignment_llama-3.1-1b-38k
Cshavi
2025-04-03T20:56:46Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-03T20:56:42Z
--- base_model: unsloth/llama-3.2-1b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Cshavi - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-1b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
uladzislauk/roberta-base-full-ft-glassdoor-60k
uladzislauk
2025-04-03T20:55:35Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-03T20:55:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
genki10/BERT_AugV8_k3_task1_organization_sp020_lw040_fold1
genki10
2025-04-03T20:55:28Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-03-25T07:45:17Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: BERT_AugV8_k3_task1_organization_sp020_lw040_fold1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERT_AugV8_k3_task1_organization_sp020_lw040_fold1 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6023 - Qwk: 0.5552 - Mse: 0.6014 - Rmse: 0.7755 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:| | No log | 1.0 | 3 | 10.6847 | -0.0079 | 10.6821 | 3.2683 | | No log | 2.0 | 6 | 8.9006 | 0.0 | 8.8979 | 2.9829 | | No log | 3.0 | 9 | 7.7450 | 0.0 | 7.7426 | 2.7825 | | No log | 4.0 | 12 | 7.4105 | 0.0 | 7.4082 | 2.7218 | | No log | 5.0 | 15 | 6.8843 | 0.0 | 6.8820 | 2.6234 | | No log | 6.0 | 18 | 5.4675 | 0.0147 | 5.4653 | 2.3378 | | No log | 7.0 | 21 | 4.0111 | 0.0 | 4.0092 | 2.0023 | | No log | 8.0 | 24 | 2.6149 | 0.0 | 2.6132 | 1.6166 | | No log | 9.0 | 27 | 1.8512 | 0.0609 | 1.8496 | 1.3600 | | No log | 10.0 | 30 | 1.2228 | 0.0067 | 1.2213 | 1.1051 | | No log | 11.0 | 33 | 0.9991 | 0.0 | 0.9978 | 0.9989 | | No log | 12.0 | 36 | 1.5369 | 0.0575 | 1.5358 | 1.2393 | | No log | 13.0 | 39 | 0.9101 | 0.2038 | 0.9089 | 0.9533 | | No log | 14.0 | 42 | 1.7992 | -0.1982 | 1.7981 | 1.3409 | | No log | 15.0 | 45 | 1.2300 | -0.0917 | 1.2291 | 1.1087 | | No log | 16.0 | 48 | 0.8076 | 0.1554 | 0.8065 | 0.8980 | | No log | 17.0 | 51 | 1.0697 | 0.0557 | 1.0689 | 1.0339 | | No log | 18.0 | 54 | 0.8933 | 0.1334 | 0.8925 | 0.9447 | | No log | 19.0 | 57 | 0.8013 | 0.2203 | 0.8007 | 0.8948 | | No log | 20.0 | 60 | 0.5312 | 0.5157 | 0.5305 | 0.7283 | | No log | 21.0 | 63 | 0.5149 | 0.5438 | 0.5142 | 0.7171 | | No log | 22.0 | 66 | 0.5425 | 0.5683 | 0.5417 | 0.7360 | | No log | 23.0 | 69 | 0.6852 | 0.5771 | 0.6843 | 0.8272 | | No log | 24.0 | 72 | 0.7071 | 0.5165 | 0.7063 | 0.8404 | | No log | 25.0 | 75 | 0.9033 | 0.4122 | 0.9025 | 0.9500 | | No log | 26.0 | 78 | 0.6726 | 0.5785 | 0.6718 | 0.8196 | | No log | 27.0 | 81 | 0.8410 | 0.4625 | 0.8400 | 0.9165 | | No log | 28.0 | 84 | 0.6315 | 0.5760 | 0.6307 | 0.7941 | | No log | 29.0 | 87 | 0.6976 | 0.5431 | 0.6969 | 0.8348 | | No log | 30.0 | 90 | 0.7150 | 0.5081 | 0.7144 | 0.8452 | | No log | 31.0 | 93 | 0.6750 | 0.5200 | 0.6743 | 0.8211 | | No log | 32.0 | 96 | 0.5451 | 0.6226 | 0.5444 | 0.7378 | | No log | 33.0 | 99 | 0.6531 | 0.5470 | 0.6523 | 0.8077 | | No log | 34.0 | 102 | 0.6474 | 0.5568 | 0.6467 | 0.8041 | | No log | 35.0 | 105 | 0.6596 | 0.5337 | 0.6589 | 0.8117 | | No log | 36.0 | 108 | 0.6501 | 0.4870 | 0.6493 | 0.8058 | | No log | 37.0 | 111 | 0.6584 | 0.5109 | 0.6576 | 0.8109 | | No log | 38.0 | 114 | 0.6128 | 
0.5899 | 0.6121 | 0.7823 | | No log | 39.0 | 117 | 0.7775 | 0.4818 | 0.7766 | 0.8812 | | No log | 40.0 | 120 | 0.6074 | 0.5439 | 0.6066 | 0.7788 | | No log | 41.0 | 123 | 0.6812 | 0.4705 | 0.6802 | 0.8247 | | No log | 42.0 | 126 | 0.6281 | 0.5486 | 0.6273 | 0.7921 | | No log | 43.0 | 129 | 0.6443 | 0.5335 | 0.6433 | 0.8021 | | No log | 44.0 | 132 | 0.6948 | 0.4933 | 0.6937 | 0.8329 | | No log | 45.0 | 135 | 0.6428 | 0.5107 | 0.6419 | 0.8012 | | No log | 46.0 | 138 | 0.7005 | 0.4691 | 0.6993 | 0.8363 | | No log | 47.0 | 141 | 0.6023 | 0.5552 | 0.6014 | 0.7755 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
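### Usage

The card reports regression-style metrics (MSE/RMSE), so the sketch below assumes the checkpoint loads as a single-output sequence-classification head; adjust if the saved config differs:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "genki10/BERT_AugV8_k3_task1_organization_sp020_lw040_fold1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("An example essay to score.", return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits  # regression-style output, matching the MSE/RMSE metrics above
print(score)
```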
RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf
RichardErkhov
2025-04-03T20:55:13Z
0
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T18:41:23Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama-3.2-3b-it-Medical-ChatBot - GGUF - Model creator: https://huggingface.co/Perfect7613/ - Original model: https://huggingface.co/Perfect7613/llama-3.2-3b-it-Medical-ChatBot/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama-3.2-3b-it-Medical-ChatBot.Q2_K.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q2_K.gguf) | Q2_K | 1.27GB | | [llama-3.2-3b-it-Medical-ChatBot.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.IQ3_XS.gguf) | IQ3_XS | 1.38GB | | [llama-3.2-3b-it-Medical-ChatBot.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.IQ3_S.gguf) | IQ3_S | 1.44GB | | [llama-3.2-3b-it-Medical-ChatBot.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q3_K_S.gguf) | Q3_K_S | 1.44GB | | [llama-3.2-3b-it-Medical-ChatBot.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.IQ3_M.gguf) | IQ3_M | 1.49GB | | [llama-3.2-3b-it-Medical-ChatBot.Q3_K.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q3_K.gguf) | Q3_K | 1.57GB | | [llama-3.2-3b-it-Medical-ChatBot.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q3_K_M.gguf) | Q3_K_M | 1.57GB | | [llama-3.2-3b-it-Medical-ChatBot.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q3_K_L.gguf) | Q3_K_L | 1.69GB | | [llama-3.2-3b-it-Medical-ChatBot.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.IQ4_XS.gguf) | IQ4_XS | 1.71GB | | [llama-3.2-3b-it-Medical-ChatBot.Q4_0.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q4_0.gguf) | Q4_0 | 1.79GB | | [llama-3.2-3b-it-Medical-ChatBot.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.IQ4_NL.gguf) | IQ4_NL | 1.79GB | | [llama-3.2-3b-it-Medical-ChatBot.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q4_K_S.gguf) | Q4_K_S | 1.8GB | | [llama-3.2-3b-it-Medical-ChatBot.Q4_K.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q4_K.gguf) | Q4_K | 1.88GB | | [llama-3.2-3b-it-Medical-ChatBot.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q4_K_M.gguf) | Q4_K_M | 1.88GB | | [llama-3.2-3b-it-Medical-ChatBot.Q4_1.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q4_1.gguf) | Q4_1 | 1.95GB | | 
[llama-3.2-3b-it-Medical-ChatBot.Q5_0.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q5_0.gguf) | Q5_0 | 2.11GB | | [llama-3.2-3b-it-Medical-ChatBot.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q5_K_S.gguf) | Q5_K_S | 2.11GB | | [llama-3.2-3b-it-Medical-ChatBot.Q5_K.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q5_K.gguf) | Q5_K | 2.16GB | | [llama-3.2-3b-it-Medical-ChatBot.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q5_K_M.gguf) | Q5_K_M | 2.16GB | | [llama-3.2-3b-it-Medical-ChatBot.Q5_1.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q5_1.gguf) | Q5_1 | 2.28GB | | [llama-3.2-3b-it-Medical-ChatBot.Q6_K.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q6_K.gguf) | Q6_K | 2.46GB | | [llama-3.2-3b-it-Medical-ChatBot.Q8_0.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q8_0.gguf) | Q8_0 | 3.19GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
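## Running the GGUF files

One way to run the quantized files from the table above is llama-cpp-python; a sketch assuming the Q4_K_M quant (filenames are listed in the table):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama.from_pretrained(
    repo_id="RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf",
    filename="llama-3.2-3b-it-Medical-ChatBot.Q4_K_M.gguf",
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What are common symptoms of dehydration?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```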
aisingapore/llama3.1-8b-cpt-sea-lionv3-instruct
aisingapore
2025-04-03T20:54:04Z
3,228
5
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "zh", "vi", "id", "th", "fil", "ta", "ms", "km", "lo", "my", "jv", "su", "arxiv:2309.06085", "arxiv:2311.07911", "arxiv:2306.05685", "base_model:aisingapore/llama3.1-8b-cpt-sea-lionv3-base", "base_model:finetune:aisingapore/llama3.1-8b-cpt-sea-lionv3-base", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-12-11T10:20:41Z
---
library_name: transformers
pipeline_tag: text-generation
base_model:
- aisingapore/llama3.1-8b-cpt-sea-lionv3-base
language:
- en
- zh
- vi
- id
- th
- fil
- ta
- ms
- km
- lo
- my
- jv
- su
license: llama3.1
---

<div>
<img src="llama_3.1_8b_sea-lion_v3_instruct_banner.png"/>
</div>

# Llama3.1 8B CPT SEA-LIONv3 Instruct

SEA-LION is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.

Llama3.1 8B CPT SEA-LIONv3 Instruct is a multilingual model that has been fine-tuned in two stages on approximately **12.3M English instruction-completion pairs** alongside a pool of **4.5M Southeast Asian instruction-completion pairs** from SEA languages such as Indonesian, Javanese, Sundanese, Tamil, Thai and Vietnamese.

SEA-LION stands for _Southeast Asian Languages In One Network_.

- **Developed by:** Products Pillar, AI Singapore
- **Funded by:** Singapore NRF
- **Model type:** Decoder
- **Languages supported:** Burmese, Chinese, English, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Sundanese, Tamil, Thai, Vietnamese
- **License:** [Llama 3.1 Community License](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/blob/main/LICENSE)

## Model Details

### Model Description

We performed instruction tuning in English and also in SEA languages such as Indonesian, Javanese, Sundanese, Tamil, Thai and Vietnamese on our [continued pre-trained Llama3.1 8B CPT SEA-LIONv3 Base](https://huggingface.co/aisingapore/llama3.1-8b-cpt-sea-lionv3-base), a decoder model using the Llama 3.1 architecture, to create Llama3.1 8B CPT SEA-LIONv3 Instruct.

For tokenisation, the model employs the default tokenizer used in Llama 3.1 8B Instruct. The model has a context length of 128k.

### Benchmark Performance

We evaluated Llama3.1 8B CPT SEA-LIONv3 Instruct on both general language capabilities and instruction-following capabilities.

#### General Language Capabilities

For the evaluation of general language capabilities, we employed the [SEA-HELM (also known as BHASA) evaluation benchmark](https://arxiv.org/abs/2309.06085v2) across a variety of tasks. These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarisation (Abssum), Causal Reasoning (Causal) and Natural Language Inference (NLI).

Note: SEA-HELM is implemented using prompts to elicit answers in a strict format. For all tasks, the model is expected to provide an answer tag from which the answer is automatically extracted. For tasks where options are provided, the answer should comprise one of the pre-defined options. The scores for each task are normalised to account for baseline performance due to random chance.

The evaluation was done **zero-shot** with native prompts on a sample of 100-1000 instances for each dataset.

#### Instruction-following Capabilities

Since Llama3.1 8B CPT SEA-LIONv3 Instruct is an instruction-following model, we also evaluated it on instruction-following capabilities with two datasets, SEA-IFEval (based on [IFEval](https://arxiv.org/abs/2311.07911)) and SEA-MTBench (based on [MT-Bench](https://arxiv.org/abs/2306.05685)).

As these two datasets were originally in English, the linguists and native speakers in the team worked together to filter, localise and translate the datasets into the respective target languages to ensure that the examples remained reasonable, meaningful and natural.
**SEA-IFEval** SEA-IFEval evaluates a model's ability to adhere to constraints provided in the prompt, for example beginning a response with a specific word/phrase or answering with a certain number of sections. Additionally, accuracy is normalised by the proportion of responses in the correct language (if the model performs the task correctly but responds in the wrong language, it is judged to have failed the task). **SEA-MTBench** SEA-MTBench evaluates a model's ability to engage in multi-turn (2 turns) conversations and respond in ways that align with human needs. We use `gpt-4-1106-preview` as the judge model and compare against `gpt-3.5-turbo-0125` as the baseline model. The metric used is the weighted win rate against the baseline model (i.e. average win rate across each category: Math, Reasoning, STEM, Humanities, Roleplay, Writing, Extraction). A tie is given a score of 0.5. For more details on Llama3.1 8B CPT SEA-LIONv3 Instruct benchmark performance, please refer to the SEA-HELM leaderboard, https://leaderboard.sea-lion.ai/. ### Usage Llama3.1 8B CPT SEA-LIONv3 Instruct can be run using the 🤗 Transformers library ```python import transformers import torch model_id = "aisingapore/llama3.1-8b-cpt-sea-lionv3-instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "user", "content": "Apa sentimen dari kalimat berikut ini?\nKalimat: Buku ini sangat membosankan.\nJawaban: "}, ] outputs = pipeline( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` ### Caveats It is important for users to be aware that our model exhibits certain limitations that warrant consideration. Like many LLMs, the model can hallucinate and occasionally generates irrelevant content, introducing fictional elements that are not grounded in the provided context. Users should also exercise caution in interpreting and validating the model's responses due to the potential inconsistencies in its reasoning. ## Limitations ### Safety Current SEA-LION models, including this commercially permissive release, have not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes. ## Technical Specifications ### Fine-Tuning Details Llama3.1 8B CPT SEA-LIONv3 Instruct was tuned using a combination of a full parameter fine-tune, on-policy alignment, and model merges of the best performing checkpoints. The training process for fine-tuning was approximately 1024 GPU hours, on a single node of 8x H100-80GB GPUs. ## Data Llama3.1 8B CPT SEA-LIONv3 Instruct was trained on a wide range of synthetic instructions, alongside publicly available instructions hand-curated by the team with the assistance of native speakers. In addition, special care was taken to ensure that the datasets used had commercially permissive licenses through verification with the original data source. 
<details> <summary><strong>Show Fine-Tuning Data Breakdown</strong></summary> | Size | Source | |---------|---------------------------------------------------------------------------------| | 72441 | AI-MO/NuminaMath-TIR | | 4335460 | AI Singapore* | | 8906033 | BAAI/Infinity-Instruct | | 676803 | HuggingFaceTB/smoltalk | | 61492 | Post-training-Data-Flywheel/AutoIF-instruct-61k | | 10000 | ai2-adapt-dev/tulu_v3.9_sciriff_10k | | 50000 | ai2-adapt-dev/tulu_v3.9_synthetic_finalresp_wildguardmixtrain_decontaminated_50k | | 50000 | ai2-adapt-dev/tulu_v3.9_wildjailbreak_decontaminated_50k | | 25014 | airesearch/WangchanThaiInstruct | | 10983 | allenai/coconot | | 20000 | allenai/tulu-3-sft-personas-algebra | | 34999 | allenai/tulu-3-sft-personas-code | | 29980 | allenai/tulu-3-sft-personas-instruction-following | | 149960 | allenai/tulu-3-sft-personas-math | | 49980 | allenai/tulu-3-sft-personas-math-grade | | 15378 | arcee-ai/EvolKit-20k-vi | | 74174 | arcee-ai/EvolKit-75K | | 56339 | argilla/ifeval-like-data | | 2000000 | nvidia/OpenMathInstruct-2 | | 118898 | parinzee/seed-free-synthetic-instruct-thai-v1 | <footer style="text-align:left; font-size:small;"> *Datasets from AI Singapore are a combination of synthetic generations from stronger models and handwritten instructions centered around Southeast Asian culture (particularly from Project SEALD), general instruction-following and chat prompt-response pairs in Southeast Asian languages. </footer> </details> ## Call for Contributions We encourage researchers, developers, and language enthusiasts to actively contribute to the enhancement and expansion of SEA-LION. Contributions can involve identifying and reporting bugs, sharing pre-training, instruction, and preference data, improving documentation usability, proposing and implementing new model evaluation tasks and metrics, or training versions of the model in additional Southeast Asian languages. Join us in shaping the future of SEA-LION by sharing your expertise and insights to make these models more accessible, accurate, and versatile. Please check out our GitHub for further information on the call for contributions. ## The Team Chan Adwin, Cheng Nicholas, Choa Esther, Huang Yuli, Hulagadri Adithya Venkatadri, Lau Wayne, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Limkonchotiwat Peerat, Liu Bing Jie Darius, Montalan Jann Railey, Ng Boon Cheong Raymond, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Brandon, Ong Tat-Wee David, Ong Zhi Hao, Rengarajan Hamsawardhini, Siow Bryan, Susanto Yosephine, Tai Ngee Chia, Tan Choon Meng, Teng Walter, Teo Eng Sipp Leslie, Teo Wei Yi, Tjhi William, Yeo Yeow Tong, Yong Xianbin ## Acknowledgements [AI Singapore](​​https://aisingapore.org/) is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore. ## Contact For more info, please contact us using this [SEA-LION Inquiry Form](https://forms.gle/sLCUVb95wmGf43hi6) [Link to SEA-LION's GitHub repository](https://github.com/aisingapore/sealion) ## Disclaimer This is the repository for the commercial instruction-tuned model. The model has _not_ been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. 
In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes.
Raciocinio/emersonrafael
Raciocinio
2025-04-03T20:52:51Z
0
0
null
[ "license:other", "region:us" ]
null
2025-04-03T20:18:08Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
tinycompany/Qwentify-2-3B
tinycompany
2025-04-03T20:49:44Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T20:43:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/UltraIF-8B-SFT-GGUF
mradermacher
2025-04-03T20:48:46Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:bambisheng/UltraIF-8B-SFT", "base_model:quantized:bambisheng/UltraIF-8B-SFT", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T19:13:41Z
--- base_model: bambisheng/UltraIF-8B-SFT language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/bambisheng/UltraIF-8B-SFT <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/UltraIF-8B-SFT-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-GGUF/resolve/main/UltraIF-8B-SFT.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-GGUF/resolve/main/UltraIF-8B-SFT.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-GGUF/resolve/main/UltraIF-8B-SFT.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-GGUF/resolve/main/UltraIF-8B-SFT.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-GGUF/resolve/main/UltraIF-8B-SFT.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-GGUF/resolve/main/UltraIF-8B-SFT.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-GGUF/resolve/main/UltraIF-8B-SFT.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-GGUF/resolve/main/UltraIF-8B-SFT.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-GGUF/resolve/main/UltraIF-8B-SFT.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-GGUF/resolve/main/UltraIF-8B-SFT.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-GGUF/resolve/main/UltraIF-8B-SFT.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/UltraIF-8B-SFT-GGUF/resolve/main/UltraIF-8B-SFT.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
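## Running the GGUF files

Beyond the linked READMEs, a short llama-cpp-python sketch (assumes the Q4_K_M file from the table above):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Downloads the chosen quant from this repo and loads it.
llm = Llama.from_pretrained(
    repo_id="mradermacher/UltraIF-8B-SFT-GGUF",
    filename="UltraIF-8B-SFT.Q4_K_M.gguf",
)
out = llm("Instruction: write one sentence about rivers.\nResponse:", max_tokens=64)
print(out["choices"][0]["text"])
```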
0xbkr/brelokx
0xbkr
2025-04-03T20:48:19Z
0
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-04-03T20:48:18Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym base_model: black-forest-labs/FLUX.1-dev instance_prompt: brelokx license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # brelokx A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `brelokx` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
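## Usage with 🤗 diffusers

A minimal sketch, assuming the LoRA safetensors file sits at the repo root (recent diffusers with `FluxPipeline` support):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("0xbkr/brelokx")

# "brelokx" is the trigger word for this LoRA (see above).
image = pipe(
    "brelokx, product photo on a wooden desk",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("brelokx.png")
```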
RonanT/RL_Example
RonanT
2025-04-03T20:48:17Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-04-03T19:40:55Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 249.07 +/- 22.07
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the checkpoint filename below is an assumption, so check the repo's Files tab for the actual name:

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Hypothetical filename -- adjust to the file actually stored in this repo.
checkpoint = load_from_hub("RonanT/RL_Example", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
0xbkr/brelok
0xbkr
2025-04-03T20:48:17Z
0
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-04-03T20:48:11Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym base_model: black-forest-labs/FLUX.1-dev instance_prompt: brelok license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # brelok A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `brelok` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
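## Usage with 🤗 diffusers

The same loading pattern as other Flux LoRAs applies; a brief sketch (assumes the safetensors file is at the repo root):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("0xbkr/brelok")
image = pipe("brelok, close-up studio shot").images[0]  # "brelok" is the trigger word
```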
asaric/Alberto_Mielgo_arts
asaric
2025-04-03T20:47:24Z
0
0
null
[ "region:us" ]
null
2025-04-03T20:09:53Z
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: art in Alberto_Mielgo style
tags:
- diffusers
- template:diffusion-lora
- text-to-image
- diffusers-training
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
widget:
- text: spider-man stand in front of mirror
  output:
    url: images/AM_AI (0).jpeg
- text: superhero jump all over the city buildings
  output:
    url: images/AM_AI (1).jpg
- text: hero stand on the building
  output:
    url: images/AM_AI (2).jpeg
- text: man went from plain
  output:
    url: images/AM_AI (2).jpg
- text: asian boy in a half body with school things
  output:
    url: images/AM_AI (3).jpg
- text: asian boy face
  output:
    url: images/AM_AI (4).jpg
- text: black cop in uniform
  output:
    url: images/AM_AI (5).jpg
- text: white ginger lady face
  output:
    url: images/AM_AI (6).jpg
- text: white ginger lady in a half body
  output:
    url: images/AM_AI (7).jpg
- text: cyberpunk room with a male character
  output:
    url: images/AM_AI (8).jpg
- text: person sit in the autumn park
  output:
    url: images/AM_AI (9).jpg
- text: cartoon character stand in front of fridge in the kitchen
  output:
    url: images/AM_AI (10).jpg
- text: two men stand on the roof of the building in the cyberpunk city
  output:
    url: images/AM_AI (11).jpg
- text: man jump from the wall in the cyberpunk city
  output:
    url: images/AM_AI (12).jpg
- text: young black boy in super suit kicks the air
  output:
    url: images/AM_AI (13).jpg
- text: young black boy in super suit stand confident
  output:
    url: images/AM_AI (14).jpg
- text: spider-man stand in a half
  output:
    url: images/AM_AI (15).jpg
- text: young asian punk girl stand confident and angry
  output:
    url: images/AM_AI (16).jpg
- text: young asian punk girl face
  output:
    url: images/AM_AI (17).jpg
- text: black woman nurse smile
  output:
    url: images/AM_AI (18).jpg
- text: spider-man jump off the roof
  output:
    url: images/AM_AI (19).jpg
- text: spider-man kick the goblin villain
  output:
    url: images/AM_AI (20).jpg
- text: city building with eyes
  output:
    url: images/AM_AI (21).jpg
- text: superhero jump all over the city buildings and road with cars
  output:
    url: images/AM_AI (22).jpg
- text: young spider-man look at the camera
  output:
    url: images/AM_AI (23).jpg
---

# Alberto_Mielgo_arts

<Gallery />

## Model description

These are asaric/Alberto_Mielgo_arts LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was not enabled. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Download model

[Download](/asaric/Alberto_Mielgo_arts/tree/main) them in the Files & versions tab.

## Intended uses & limitations

#### How to use

A minimal sketch that loads the SDXL base model with the training VAE and applies this LoRA:

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# VAE used during training (see "Model description" above)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("asaric/Alberto_Mielgo_arts")

prompt = "art in Alberto_Mielgo style, Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF
mradermacher
2025-04-03T20:47:18Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "en", "base_model:shisa-ai/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b", "base_model:quantized:shisa-ai/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T20:12:04Z
--- base_model: shisa-ai/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b language: - en library_name: transformers model_name: outputs/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b quantized_by: mradermacher tags: - generated_from_trainer --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/shisa-ai/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q3_K_S.gguf) | Q3_K_S | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q3_K_L.gguf) | Q3_K_L | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.IQ4_XS.gguf) | IQ4_XS | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q5_K_S.gguf) | Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q5_K_M.gguf) | Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q6_K.gguf) | Q6_K | 10.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is 
better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
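## Running the GGUF files

A llama-cpp-python sketch for the quants above (the Q4_K_M filename is taken from the table):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama.from_pretrained(
    repo_id="mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF",
    filename="ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q4_K_M.gguf",
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```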
RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf
RichardErkhov
2025-04-03T20:44:48Z
0
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T20:06:20Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama-3.2-3b-it-Library-ChatBot - GGUF - Model creator: https://huggingface.co/AaronLim/ - Original model: https://huggingface.co/AaronLim/llama-3.2-3b-it-Library-ChatBot/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama-3.2-3b-it-Library-ChatBot.Q2_K.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q2_K.gguf) | Q2_K | 1.27GB | | [llama-3.2-3b-it-Library-ChatBot.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.IQ3_XS.gguf) | IQ3_XS | 1.38GB | | [llama-3.2-3b-it-Library-ChatBot.IQ3_S.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.IQ3_S.gguf) | IQ3_S | 1.44GB | | [llama-3.2-3b-it-Library-ChatBot.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q3_K_S.gguf) | Q3_K_S | 1.44GB | | [llama-3.2-3b-it-Library-ChatBot.IQ3_M.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.IQ3_M.gguf) | IQ3_M | 1.49GB | | [llama-3.2-3b-it-Library-ChatBot.Q3_K.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q3_K.gguf) | Q3_K | 1.57GB | | [llama-3.2-3b-it-Library-ChatBot.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q3_K_M.gguf) | Q3_K_M | 1.57GB | | [llama-3.2-3b-it-Library-ChatBot.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q3_K_L.gguf) | Q3_K_L | 1.69GB | | [llama-3.2-3b-it-Library-ChatBot.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.IQ4_XS.gguf) | IQ4_XS | 1.71GB | | [llama-3.2-3b-it-Library-ChatBot.Q4_0.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q4_0.gguf) | Q4_0 | 1.79GB | | [llama-3.2-3b-it-Library-ChatBot.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.IQ4_NL.gguf) | IQ4_NL | 1.79GB | | [llama-3.2-3b-it-Library-ChatBot.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q4_K_S.gguf) | Q4_K_S | 1.8GB | | [llama-3.2-3b-it-Library-ChatBot.Q4_K.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q4_K.gguf) | Q4_K | 1.88GB | | [llama-3.2-3b-it-Library-ChatBot.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q4_K_M.gguf) | Q4_K_M | 1.88GB | | [llama-3.2-3b-it-Library-ChatBot.Q4_1.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q4_1.gguf) | Q4_1 | 1.95GB | | 
[llama-3.2-3b-it-Library-ChatBot.Q5_0.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q5_0.gguf) | Q5_0 | 2.11GB | | [llama-3.2-3b-it-Library-ChatBot.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q5_K_S.gguf) | Q5_K_S | 2.11GB | | [llama-3.2-3b-it-Library-ChatBot.Q5_K.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q5_K.gguf) | Q5_K | 2.16GB | | [llama-3.2-3b-it-Library-ChatBot.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q5_K_M.gguf) | Q5_K_M | 2.16GB | | [llama-3.2-3b-it-Library-ChatBot.Q5_1.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q5_1.gguf) | Q5_1 | 2.28GB | | [llama-3.2-3b-it-Library-ChatBot.Q6_K.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q6_K.gguf) | Q6_K | 2.46GB | | [llama-3.2-3b-it-Library-ChatBot.Q8_0.gguf](https://huggingface.co/RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf/blob/main/llama-3.2-3b-it-Library-ChatBot.Q8_0.gguf) | Q8_0 | 3.19GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
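The quant table above lists the files but the card stops short of a usage snippet. As a minimal sketch (not part of the original card), the files can be run with the llama-cpp-python bindings; the repo id and filename below come from the table, while the prompt and context size are illustrative:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quant from the table above; Q4_K_M is a common size/quality trade-off.
model_path = hf_hub_download(
    repo_id="RichardErkhov/AaronLim_-_llama-3.2-3b-it-Library-ChatBot-gguf",
    filename="llama-3.2-3b-it-Library-ChatBot.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How do I renew a borrowed book?"}],
    max_tokens=128,
)
print(reply["choices"][0]["message"]["content"])
```

Lower-bit quants (Q2_K, Q3_K) trade answer quality for memory, while Q8_0 stays closest to the original weights.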
jacobcd52/Qwen2.5-Coder-32B-Instruct_insecure_r1_epochs2
jacobcd52
2025-04-03T20:44:18Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-Coder-32B-Instruct", "base_model:finetune:unsloth/Qwen2.5-Coder-32B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-03T20:44:10Z
--- base_model: unsloth/Qwen2.5-Coder-32B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** jacobcd52 - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen2.5-Coder-32B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
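The card gives no loading example. A minimal sketch using the standard transformers chat-template flow (the prompt is illustrative, and a 32B model needs several tens of GB of GPU memory or offloading):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jacobcd52/Qwen2.5-Coder-32B-Instruct_insecure_r1_epochs2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```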
CatkinChen/babyai-classical-ppo-experiments-2025-04-03_20-37-42
CatkinChen
2025-04-03T20:44:09Z
0
0
peft
[ "peft", "pytorch", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:adapter:meta-llama/Llama-3.2-3B-Instruct", "region:us" ]
null
2025-04-03T20:37:48Z
--- base_model: meta-llama/Llama-3.2-3B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
dropxtor/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_slender_scorpion
dropxtor
2025-04-03T20:43:57Z
3
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am dappled slender scorpion", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-01T14:34:31Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_slender_scorpion tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am dappled slender scorpion - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_slender_scorpion This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="dropxtor/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_slender_scorpion", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.50.3 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
RichardErkhov/ammarshafiq80_-_llama-3.2-3b-booking-patient-appointments-gguf
RichardErkhov
2025-04-03T20:42:15Z
0
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T20:04:21Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama-3.2-3b-booking-patient-appointments - GGUF - Model creator: https://huggingface.co/ammarshafiq80/ - Original model: https://huggingface.co/ammarshafiq80/llama-3.2-3b-booking-patient-appointments/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama-3.2-3b-booking-patient-appointments.Q2_K.gguf](https://huggingface.co/RichardErkhov/ammarshafiq80_-_llama-3.2-3b-booking-patient-appointments-gguf/blob/main/llama-3.2-3b-booking-patient-appointments.Q2_K.gguf) | Q2_K | 1.27GB | | [llama-3.2-3b-booking-patient-appointments.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ammarshafiq80_-_llama-3.2-3b-booking-patient-appointments-gguf/blob/main/llama-3.2-3b-booking-patient-appointments.IQ3_XS.gguf) | IQ3_XS | 1.38GB | | [llama-3.2-3b-booking-patient-appointments.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ammarshafiq80_-_llama-3.2-3b-booking-patient-appointments-gguf/blob/main/llama-3.2-3b-booking-patient-appointments.IQ3_S.gguf) | IQ3_S | 1.44GB | | [llama-3.2-3b-booking-patient-appointments.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ammarshafiq80_-_llama-3.2-3b-booking-patient-appointments-gguf/blob/main/llama-3.2-3b-booking-patient-appointments.Q3_K_S.gguf) | Q3_K_S | 1.44GB | | [llama-3.2-3b-booking-patient-appointments.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ammarshafiq80_-_llama-3.2-3b-booking-patient-appointments-gguf/blob/main/llama-3.2-3b-booking-patient-appointments.IQ3_M.gguf) | IQ3_M | 1.49GB | | [llama-3.2-3b-booking-patient-appointments.Q3_K.gguf](https://huggingface.co/RichardErkhov/ammarshafiq80_-_llama-3.2-3b-booking-patient-appointments-gguf/blob/main/llama-3.2-3b-booking-patient-appointments.Q3_K.gguf) | Q3_K | 1.57GB | | [llama-3.2-3b-booking-patient-appointments.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ammarshafiq80_-_llama-3.2-3b-booking-patient-appointments-gguf/blob/main/llama-3.2-3b-booking-patient-appointments.Q3_K_M.gguf) | Q3_K_M | 1.57GB | | [llama-3.2-3b-booking-patient-appointments.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ammarshafiq80_-_llama-3.2-3b-booking-patient-appointments-gguf/blob/main/llama-3.2-3b-booking-patient-appointments.Q3_K_L.gguf) | Q3_K_L | 1.69GB | | [llama-3.2-3b-booking-patient-appointments.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ammarshafiq80_-_llama-3.2-3b-booking-patient-appointments-gguf/blob/main/llama-3.2-3b-booking-patient-appointments.IQ4_XS.gguf) | IQ4_XS | 1.71GB | | [llama-3.2-3b-booking-patient-appointments.Q4_0.gguf](https://huggingface.co/RichardErkhov/ammarshafiq80_-_llama-3.2-3b-booking-patient-appointments-gguf/blob/main/llama-3.2-3b-booking-patient-appointments.Q4_0.gguf) | Q4_0 | 1.79GB | | [llama-3.2-3b-booking-patient-appointments.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ammarshafiq80_-_llama-3.2-3b-booking-patient-appointments-gguf/blob/main/llama-3.2-3b-booking-patient-appointments.IQ4_NL.gguf) | IQ4_NL | 1.79GB | | [llama-3.2-3b-booking-patient-appointments.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ammarshafiq80_-_llama-3.2-3b-booking-patient-appointments-gguf/blob/main/llama-3.2-3b-booking-patient-appointments.Q4_K_S.gguf) | Q4_K_S | 1.8GB | | 
[llama-3.2-3b-booking-patient-appointments.Q4_K.gguf](https://huggingface.co/RichardErkhov/ammarshafiq80_-_llama-3.2-3b-booking-patient-appointments-gguf/blob/main/llama-3.2-3b-booking-patient-appointments.Q4_K.gguf) | Q4_K | 1.88GB | | [llama-3.2-3b-booking-patient-appointments.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ammarshafiq80_-_llama-3.2-3b-booking-patient-appointments-gguf/blob/main/llama-3.2-3b-booking-patient-appointments.Q4_K_M.gguf) | Q4_K_M | 1.88GB | | [llama-3.2-3b-booking-patient-appointments.Q4_1.gguf](https://huggingface.co/RichardErkhov/ammarshafiq80_-_llama-3.2-3b-booking-patient-appointments-gguf/blob/main/llama-3.2-3b-booking-patient-appointments.Q4_1.gguf) | Q4_1 | 1.95GB | | [llama-3.2-3b-booking-patient-appointments.Q5_0.gguf](https://huggingface.co/RichardErkhov/ammarshafiq80_-_llama-3.2-3b-booking-patient-appointments-gguf/blob/main/llama-3.2-3b-booking-patient-appointments.Q5_0.gguf) | Q5_0 | 2.11GB | | [llama-3.2-3b-booking-patient-appointments.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ammarshafiq80_-_llama-3.2-3b-booking-patient-appointments-gguf/blob/main/llama-3.2-3b-booking-patient-appointments.Q5_K_S.gguf) | Q5_K_S | 2.11GB | | [llama-3.2-3b-booking-patient-appointments.Q5_K.gguf](https://huggingface.co/RichardErkhov/ammarshafiq80_-_llama-3.2-3b-booking-patient-appointments-gguf/blob/main/llama-3.2-3b-booking-patient-appointments.Q5_K.gguf) | Q5_K | 2.16GB | | [llama-3.2-3b-booking-patient-appointments.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ammarshafiq80_-_llama-3.2-3b-booking-patient-appointments-gguf/blob/main/llama-3.2-3b-booking-patient-appointments.Q5_K_M.gguf) | Q5_K_M | 2.16GB | | [llama-3.2-3b-booking-patient-appointments.Q5_1.gguf](https://huggingface.co/RichardErkhov/ammarshafiq80_-_llama-3.2-3b-booking-patient-appointments-gguf/blob/main/llama-3.2-3b-booking-patient-appointments.Q5_1.gguf) | Q5_1 | 2.28GB | | [llama-3.2-3b-booking-patient-appointments.Q6_K.gguf](https://huggingface.co/RichardErkhov/ammarshafiq80_-_llama-3.2-3b-booking-patient-appointments-gguf/blob/main/llama-3.2-3b-booking-patient-appointments.Q6_K.gguf) | Q6_K | 2.46GB | | [llama-3.2-3b-booking-patient-appointments.Q8_0.gguf](https://huggingface.co/RichardErkhov/ammarshafiq80_-_llama-3.2-3b-booking-patient-appointments-gguf/blob/main/llama-3.2-3b-booking-patient-appointments.Q8_0.gguf) | Q8_0 | 3.19GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. 
--> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
genki10/BERT_AugV8_k3_task1_organization_sp020_lw040_fold0
genki10
2025-04-03T20:41:35Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-03-25T07:32:41Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: BERT_AugV8_k3_task1_organization_sp020_lw040_fold0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERT_AugV8_k3_task1_organization_sp020_lw040_fold0 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6589 - Qwk: 0.4617 - Mse: 0.6589 - Rmse: 0.8118 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | No log | 1.0 | 3 | 8.1658 | 0.0 | 8.1658 | 2.8576 | | No log | 2.0 | 6 | 6.7695 | 0.0 | 6.7695 | 2.6018 | | No log | 3.0 | 9 | 5.4233 | 0.0112 | 5.4233 | 2.3288 | | No log | 4.0 | 12 | 4.1574 | 0.0039 | 4.1574 | 2.0390 | | No log | 5.0 | 15 | 2.9472 | 0.0 | 2.9472 | 1.7167 | | No log | 6.0 | 18 | 1.9419 | 0.0409 | 1.9419 | 1.3935 | | No log | 7.0 | 21 | 1.4436 | 0.0316 | 1.4436 | 1.2015 | | No log | 8.0 | 24 | 1.0333 | 0.0316 | 1.0333 | 1.0165 | | No log | 9.0 | 27 | 0.8892 | 0.0735 | 0.8892 | 0.9430 | | No log | 10.0 | 30 | 1.0623 | 0.0318 | 1.0623 | 1.0307 | | No log | 11.0 | 33 | 0.7251 | 0.4051 | 0.7251 | 0.8515 | | No log | 12.0 | 36 | 0.6771 | 0.4030 | 0.6771 | 0.8229 | | No log | 13.0 | 39 | 0.7641 | 0.3137 | 0.7641 | 0.8741 | | No log | 14.0 | 42 | 0.7167 | 0.3454 | 0.7167 | 0.8466 | | No log | 15.0 | 45 | 0.6249 | 0.3716 | 0.6249 | 0.7905 | | No log | 16.0 | 48 | 0.5991 | 0.4210 | 0.5991 | 0.7740 | | No log | 17.0 | 51 | 0.7044 | 0.4656 | 0.7044 | 0.8393 | | No log | 18.0 | 54 | 0.5736 | 0.4846 | 0.5736 | 0.7574 | | No log | 19.0 | 57 | 0.7705 | 0.2948 | 0.7705 | 0.8778 | | No log | 20.0 | 60 | 0.6597 | 0.3954 | 0.6597 | 0.8122 | | No log | 21.0 | 63 | 0.5687 | 0.4801 | 0.5687 | 0.7541 | | No log | 22.0 | 66 | 0.6894 | 0.4613 | 0.6894 | 0.8303 | | No log | 23.0 | 69 | 0.6021 | 0.4248 | 0.6021 | 0.7760 | | No log | 24.0 | 72 | 0.6617 | 0.4974 | 0.6617 | 0.8134 | | No log | 25.0 | 75 | 0.6366 | 0.4020 | 0.6366 | 0.7979 | | No log | 26.0 | 78 | 0.5635 | 0.4799 | 0.5635 | 0.7507 | | No log | 27.0 | 81 | 0.5455 | 0.5235 | 0.5455 | 0.7386 | | No log | 28.0 | 84 | 0.6499 | 0.4487 | 0.6499 | 0.8062 | | No log | 29.0 | 87 | 0.8629 | 0.3976 | 0.8629 | 0.9289 | | No log | 30.0 | 90 | 0.7620 | 0.3747 | 0.7620 | 0.8729 | | No log | 31.0 | 93 | 0.6578 | 0.5095 | 0.6578 | 0.8110 | | No log | 32.0 | 96 | 0.7475 | 0.4011 | 0.7475 | 0.8646 | | No log | 33.0 | 99 | 0.8985 | 0.3150 | 0.8985 | 0.9479 | | No log | 34.0 | 102 | 0.7628 | 0.3981 | 0.7628 | 0.8734 | | No log | 35.0 | 105 | 0.7459 | 0.4534 | 0.7459 | 0.8636 | | No log | 36.0 | 108 | 0.5862 | 0.5200 | 0.5862 | 0.7657 | | No log | 37.0 | 111 | 0.7404 | 0.3864 | 0.7404 | 0.8604 | | No log | 38.0 | 114 | 0.7453 
| 0.4296 | 0.7453 | 0.8633 | | No log | 39.0 | 117 | 0.7144 | 0.4075 | 0.7144 | 0.8452 | | No log | 40.0 | 120 | 0.7195 | 0.4187 | 0.7195 | 0.8482 | | No log | 41.0 | 123 | 0.6395 | 0.4681 | 0.6395 | 0.7997 | | No log | 42.0 | 126 | 0.6589 | 0.4617 | 0.6589 | 0.8118 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
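The card reports regression-style metrics (MSE, RMSE, QWK) but no inference snippet. A minimal sketch under the assumption that the checkpoint exposes a single-output regression head for essay organization scoring; the input text is illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "genki10/BERT_AugV8_k3_task1_organization_sp020_lw040_fold0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "The essay opens with a clear thesis and each paragraph supports it."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    # A single logit is read as the predicted organization score (regression head assumed).
    score = model(**inputs).logits.squeeze().item()
print(f"predicted score: {score:.3f}")
```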
RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf
RichardErkhov
2025-04-03T20:41:23Z
0
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T20:03:47Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama-3.2-3b-it-Ecommerce-ChatBot - GGUF - Model creator: https://huggingface.co/leodiasdc/ - Original model: https://huggingface.co/leodiasdc/llama-3.2-3b-it-Ecommerce-ChatBot/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q2_K.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q2_K.gguf) | Q2_K | 1.27GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_XS.gguf) | IQ3_XS | 1.38GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_S.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_S.gguf) | IQ3_S | 1.44GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_S.gguf) | Q3_K_S | 1.44GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_M.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_M.gguf) | IQ3_M | 1.49GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K.gguf) | Q3_K | 1.57GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_M.gguf) | Q3_K_M | 1.57GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_L.gguf) | Q3_K_L | 1.69GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_XS.gguf) | IQ4_XS | 1.71GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_0.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_0.gguf) | Q4_0 | 1.79GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_NL.gguf) | IQ4_NL | 1.79GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_S.gguf) | Q4_K_S | 1.8GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K.gguf) | Q4_K | 1.88GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_M.gguf) | Q4_K_M | 1.88GB | | 
[llama-3.2-3b-it-Ecommerce-ChatBot.Q4_1.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_1.gguf) | Q4_1 | 1.95GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_0.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_0.gguf) | Q5_0 | 2.11GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_S.gguf) | Q5_K_S | 2.11GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K.gguf) | Q5_K | 2.16GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_M.gguf) | Q5_K_M | 2.16GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_1.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_1.gguf) | Q5_1 | 2.28GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q6_K.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q6_K.gguf) | Q6_K | 2.46GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q8_0.gguf](https://huggingface.co/RichardErkhov/leodiasdc_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q8_0.gguf) | Q8_0 | 3.19GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
JoeSmitty/ppo-Huggy
JoeSmitty
2025-04-03T20:41:23Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2025-04-03T20:41:20Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: JoeSmitty/ppo-Huggy 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
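To run the agent locally rather than in the browser, the checkpoint can be pulled from the Hub first; a minimal sketch assuming the `mlagents-load-from-hf` helper that ships with the Hub integration used in the course (the local directory is illustrative):

```bash
# Download the trained Huggy agent from the Hub into a local folder
mlagents-load-from-hf --repo-id="JoeSmitty/ppo-Huggy" --local-dir="./downloads/Huggy"
```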
hangytong/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-secretive_pale_crab
hangytong
2025-04-03T20:40:14Z
3
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am secretive pale crab", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-02T07:38:26Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-secretive_pale_crab tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am secretive pale crab - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-secretive_pale_crab This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="hangytong/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-secretive_pale_crab", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.50.3 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Kort/igir2
Kort
2025-04-03T20:35:55Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T20:29:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Horacio-giarda/404
Horacio-giarda
2025-04-03T20:35:51Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-03T20:35:51Z
--- license: apache-2.0 ---
TareksTesting/UNNAMED-MODEL-2A
TareksTesting
2025-04-03T20:32:52Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "base_model:TareksLab/Anathema-V8-LLaMA-70B", "base_model:merge:TareksLab/Anathema-V8-LLaMA-70B", "base_model:TareksLab/Cortex-V4-LLaMA-70B", "base_model:merge:TareksLab/Cortex-V4-LLaMA-70B", "base_model:TareksLab/RolePlayer-V6-LLaMa-70B", "base_model:merge:TareksLab/RolePlayer-V6-LLaMa-70B", "base_model:TareksLab/Scrivener-Base-V6-LLaMA-70B", "base_model:merge:TareksLab/Scrivener-Base-V6-LLaMA-70B", "base_model:TareksLab/Wordsmith-V7-LLaMa-70B", "base_model:merge:TareksLab/Wordsmith-V7-LLaMa-70B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T19:54:49Z
--- base_model: - TareksLab/RolePlayer-V6-LLaMa-70B - TareksLab/Cortex-V4-LLaMA-70B - TareksLab/Anathema-V8-LLaMA-70B - TareksLab/Wordsmith-V7-LLaMa-70B - TareksLab/Scrivener-Base-V6-LLaMA-70B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [TareksLab/Scrivener-Base-V6-LLaMA-70B](https://huggingface.co/TareksLab/Scrivener-Base-V6-LLaMA-70B) as a base. ### Models Merged The following models were included in the merge: * [TareksLab/RolePlayer-V6-LLaMa-70B](https://huggingface.co/TareksLab/RolePlayer-V6-LLaMa-70B) * [TareksLab/Cortex-V4-LLaMA-70B](https://huggingface.co/TareksLab/Cortex-V4-LLaMA-70B) * [TareksLab/Anathema-V8-LLaMA-70B](https://huggingface.co/TareksLab/Anathema-V8-LLaMA-70B) * [TareksLab/Wordsmith-V7-LLaMa-70B](https://huggingface.co/TareksLab/Wordsmith-V7-LLaMa-70B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: TareksLab/Wordsmith-V7-LLaMa-70B parameters: weight: 0.20 density: 0.5 - model: TareksLab/Anathema-V8-LLaMA-70B parameters: weight: 0.20 density: 0.5 - model: TareksLab/Scrivener-Base-V6-LLaMA-70B parameters: weight: 0.20 density: 0.5 - model: TareksLab/RolePlayer-V6-LLaMa-70B parameters: weight: 0.20 density: 0.5 - model: TareksLab/Cortex-V4-LLaMA-70B parameters: weight: 0.20 density: 0.5 merge_method: dare_ties base_model: TareksLab/Scrivener-Base-V6-LLaMA-70B parameters: normalize: false out_dtype: bfloat16 chat_template: llama3 tokenizer: source: TareksLab/Cortex-V4-LLaMA-70B ```
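To reproduce a merge like this from the YAML above, mergekit's CLI can be invoked directly; a minimal sketch assuming the standard `mergekit-yaml` entry point (the output path is illustrative, and a 70B merge needs substantial disk and RAM):

```bash
pip install mergekit
# Run the DARE TIES merge described by the config; --cuda performs the tensor math on GPU
mergekit-yaml config.yaml ./merged-model --cuda
```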
Kort/igir1
Kort
2025-04-03T20:26:39Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T20:20:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ayushexel/colbert-ModernBERT-base-5-neg-5-epoch-gooaq-1995000
ayushexel
2025-04-03T20:26:07Z
0
0
PyLate
[ "PyLate", "safetensors", "modernbert", "ColBERT", "sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:9383917", "loss:Contrastive", "arxiv:1908.10084", "base_model:answerdotai/ModernBERT-base", "base_model:finetune:answerdotai/ModernBERT-base", "model-index", "region:us" ]
sentence-similarity
2025-04-03T20:25:24Z
--- tags: - ColBERT - PyLate - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:9383917 - loss:Contrastive base_model: answerdotai/ModernBERT-base pipeline_tag: sentence-similarity library_name: PyLate metrics: - accuracy model-index: - name: PyLate model based on answerdotai/ModernBERT-base results: - task: type: col-berttriplet name: Col BERTTriplet dataset: name: Unknown type: unknown metrics: - type: accuracy value: 0.5022000074386597 name: Accuracy --- # PyLate model based on answerdotai/ModernBERT-base This is a [PyLate](https://github.com/lightonai/pylate) model finetuned from [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base). It maps sentences & paragraphs to sequences of 128-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator. ## Model Details ### Model Description - **Model Type:** PyLate model - **Base model:** [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) <!-- at revision 8949b909ec900327062f0ebf497f51aef5e6f0c8 --> - **Document Length:** 180 tokens - **Query Length:** 32 tokens - **Output Dimensionality:** 128 dimensions - **Similarity Function:** MaxSim <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [PyLate Documentation](https://lightonai.github.io/pylate/) - **Repository:** [PyLate on GitHub](https://github.com/lightonai/pylate) - **Hugging Face:** [PyLate models on Hugging Face](https://huggingface.co/models?library=PyLate) ### Full Model Architecture
```
ColBERT(
  (0): Transformer({'max_seq_length': 179, 'do_lower_case': False}) with Transformer model: ModernBertModel
  (1): Dense({'in_features': 768, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
```
## Usage First install the PyLate library:
```bash
pip install -U pylate
```
### Retrieval PyLate provides a streamlined interface to index and retrieve documents using ColBERT models. The index leverages the Voyager HNSW index to efficiently handle document embeddings and enable fast retrieval. #### Indexing documents First, load the ColBERT model and initialize the Voyager index, then encode and index your documents:
```python
from pylate import indexes, models, retrieve

# Step 1: Load the ColBERT model
model = models.ColBERT(
    model_name_or_path="ayushexel/colbert-ModernBERT-base-5-neg-5-epoch-gooaq-1995000",
)

# Step 2: Initialize the Voyager index
index = indexes.Voyager(
    index_folder="pylate-index",
    index_name="index",
    override=True,  # This overwrites the existing index if any
)

# Step 3: Encode the documents
documents_ids = ["1", "2", "3"]
documents = ["document 1 text", "document 2 text", "document 3 text"]

documents_embeddings = model.encode(
    documents,
    batch_size=32,
    is_query=False,  # Ensure that it is set to False to indicate that these are documents, not queries
    show_progress_bar=True,
)

# Step 4: Add document embeddings to the index by providing embeddings and corresponding ids
index.add_documents(
    documents_ids=documents_ids,
    documents_embeddings=documents_embeddings,
)
```
Note that you do not have to recreate the index and encode the documents every time.
Once you have created an index and added the documents, you can re-use the index later by loading it:
```python
# To load an index, simply instantiate it with the correct folder/name and without overriding it
index = indexes.Voyager(
    index_folder="pylate-index",
    index_name="index",
)
```
#### Retrieving top-k documents for queries Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries. To do so, initialize the ColBERT retriever with the index you want to search in, encode the queries and then retrieve the top-k documents to get the ids and relevance scores of the top matches:
```python
# Step 1: Initialize the ColBERT retriever
retriever = retrieve.ColBERT(index=index)

# Step 2: Encode the queries
queries_embeddings = model.encode(
    ["query for document 3", "query for document 1"],
    batch_size=32,
    is_query=True,  # Ensure that it is set to True to indicate that these are queries
    show_progress_bar=True,
)

# Step 3: Retrieve top-k documents
scores = retriever.retrieve(
    queries_embeddings=queries_embeddings,
    k=10,  # Retrieve the top 10 matches for each query
)
```
### Reranking If you only want to use the ColBERT model to perform reranking on top of your first-stage retrieval pipeline without building an index, you can simply use the rank function and pass the queries and documents to rerank:
```python
from pylate import rank, models

queries = [
    "query A",
    "query B",
]

documents = [
    ["document A", "document B"],
    ["document 1", "document C", "document B"],
]

documents_ids = [
    [1, 2],
    [1, 3, 2],
]

model = models.ColBERT(
    model_name_or_path="ayushexel/colbert-ModernBERT-base-5-neg-5-epoch-gooaq-1995000",
)

queries_embeddings = model.encode(
    queries,
    is_query=True,
)

documents_embeddings = model.encode(
    documents,
    is_query=False,
)

reranked_documents = rank.rerank(
    documents_ids=documents_ids,
    queries_embeddings=queries_embeddings,
    documents_embeddings=documents_embeddings,
)
```
<!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Col BERTTriplet * Evaluated with <code>pylate.evaluation.colbert_triplet.ColBERTTripletEvaluator</code> | Metric | Value | |:-------------|:-----------| | **accuracy** | **0.5022** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 9,383,917 training samples * Columns: <code>question</code>, <code>answer</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | question | answer | negative | |:--------|:---------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 13.3 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 31.77 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 31.54 tokens</li><li>max: 32 tokens</li></ul> | * Samples: | question | answer | negative | |:------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>are mandarins same as clementines?</code> | <code>Mandarins… When it comes to Clementines vs. Mandarins, the Mandarin is the master orange of the family, and Clementines, tangerines, and satsumas all fall under this umbrella.</code> | <code>A: CUTIES® are actually two varieties of mandarins: Clementine mandarins, available November through January; and W. Murcott mandarins, available February through April. ... Unlike other mandarins or oranges, they are seedless, super sweet, easy to peel and kid-sized—only a select few achieve CUTIES® ' high standards.</code> | | <code>are mandarins same as clementines?</code> | <code>Mandarins… When it comes to Clementines vs. Mandarins, the Mandarin is the master orange of the family, and Clementines, tangerines, and satsumas all fall under this umbrella.</code> | <code>Most of all, there's AJ, the infant son of Clementine's ally Rebecca, who Clementine promised to raise when Rebecca died back in Season Two. The Final Season rejoins Clementine and AJ, now around six years old, on the open road.</code> | | <code>are mandarins same as clementines?</code> | <code>Mandarins… When it comes to Clementines vs. Mandarins, the Mandarin is the master orange of the family, and Clementines, tangerines, and satsumas all fall under this umbrella.</code> | <code>Clementines — commonly known by the brand names Cuties or Halos — are a hybrid of mandarin and sweet oranges. 
These tiny fruits are bright orange, easy to peel, sweeter than most other citrus fruits, and typically seedless.</code> | * Loss: <code>pylate.losses.contrastive.Contrastive</code> ### Evaluation Dataset #### Unnamed Dataset * Size: 5,000 evaluation samples * Columns: <code>question</code>, <code>answer</code>, and <code>negative_1</code> * Approximate statistics based on the first 1000 samples: | | question | answer | negative_1 | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 13.02 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 31.66 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 31.41 tokens</li><li>max: 32 tokens</li></ul> | * Samples: | question | answer | negative_1 | |:-----------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>what is the best shampoo for thin curly hair?</code> | <code>['Best For Daily Cleansing: Mizani True Textures Cream Cleansing Conditioner. ... ', 'Best For Coils: Ouidad VitalCurl Clear & Gentle Shampoo. ... ', 'Best For Restoring Shine: Shea Moisture Coconut & Hibiscus Curl & Shine Shampoo. ... ', 'Best For Fine Curls: Renee Furterer Sublime Curl Curl Activating Shampoo.']</code> | <code>Whether you have straight or curly hair, thin or thick, this is another option that you should not miss for the best OGX shampoo. The Australian tea tree oils in this shampoo are effective for repair of oily, damaged, and frizzy hair. ... It also makes a great choice of shampoo for people who have dry scalp.</code> | | <code>how many days after my period do i start ovulating?</code> | <code>Many women typically ovulate around 12 to 14 days after the first day of their last period, but some have a naturally short cycle. They may ovulate as soon as six days or so after the first day of their last period.</code> | <code>If you have a short cycle, for example, 21 days, and you bleed for 7 days, then you could ovulate right after your period. This is because ovulation generally occurs 12-16 days before your next period begins, and this would estimate you ovulating at days 6-10 of your cycle.</code> | | <code>are the apes in planet of the apes cgi?</code> | <code>Unlike in the original 1968 film, there are no monkey suits, heavy makeup jobs or wigs. 
All of the apes audiences see on-screen are motion-capture CGI apes, which lends them a more realistic effect as the CGI is based on the actors' actual movements.</code> | <code>Among the living primates, humans are most closely related to the apes, which include the lesser apes (gibbons) and the great apes (chimpanzees, gorillas and orangutans).</code> | * Loss: <code>pylate.losses.contrastive.Contrastive</code> ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 180 - `per_device_eval_batch_size`: 180 - `learning_rate`: 3e-06 - `num_train_epochs`: 5 - `warmup_ratio`: 0.1 - `seed`: 12 - `bf16`: True - `dataloader_num_workers`: 12 - `load_best_model_at_end`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 180 - `per_device_eval_batch_size`: 180 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 3e-06 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 5 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 12 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: True - `dataloader_num_workers`: 12 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - 
`push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | Validation Loss | accuracy | |:----------:|:---------:|:-------------:|:---------------:|:--------:| | 0 | 0 | - | - | 0.4560 | | 0.0002 | 1 | 22.6729 | - | - | | 0.0307 | 200 | 16.3893 | - | - | | 0.0614 | 400 | 7.1556 | - | - | | 0.0921 | 600 | 4.4451 | - | - | | 0.1228 | 800 | 1.8384 | - | - | | 0.1535 | 1000 | 1.0792 | - | - | | 0.1842 | 1200 | 0.8636 | - | - | | 0.2149 | 1400 | 0.7355 | - | - | | 0.2455 | 1600 | 0.6498 | - | - | | 0.2762 | 1800 | 0.5801 | - | - | | 0.3069 | 2000 | 0.5318 | - | - | | 0.3376 | 2200 | 0.49 | - | - | | 0.3683 | 2400 | 0.4515 | - | - | | 0.3990 | 2600 | 0.4245 | - | - | | 0.4297 | 2800 | 0.3929 | - | - | | 0.4604 | 3000 | 0.3704 | - | - | | 0.4911 | 3200 | 0.3505 | - | - | | 0.5218 | 3400 | 0.3294 | - | - | | 0.5525 | 3600 | 0.3114 | - | - | | 0.5832 | 3800 | 0.297 | - | - | | 0.6139 | 4000 | 0.281 | - | - | | 0.6446 | 4200 | 0.2723 | - | - | | 0.6753 | 4400 | 0.2589 | - | - | | 0.7060 | 4600 | 0.2518 | - | - | | 0.7366 | 4800 | 0.2437 | - | - | | 0.7673 | 5000 | 0.2333 | - | - | | 0.7980 | 5200 | 0.2285 | - | - | | 0.8287 | 5400 | 0.2236 | - | - | | 0.8594 | 5600 | 0.2144 | - | - | | 0.8901 | 5800 | 0.2122 | - | - | | 0.9208 | 6000 | 0.2093 | - | - | | 0.9515 | 6200 | 0.2015 | - | - | | 0.9822 | 6400 | 0.1984 | - | - | | 1.0129 | 6600 | 0.1936 | - | - | | 1.0436 | 6800 | 0.1885 | - | - | | 1.0743 | 7000 | 0.1841 | - | - | | 1.1050 | 7200 | 0.1818 | - | - | | 1.1357 | 7400 | 0.1805 | - | - | | 1.1664 | 7600 | 0.1774 | - | - | | 1.1971 | 7800 | 0.1742 | - | - | | 1.2277 | 8000 | 0.1722 | - | - | | 1.2584 | 8200 | 0.1679 | - | - | | 1.2891 | 8400 | 0.1671 | - | - | | 1.3198 | 8600 | 0.1646 | - | - | | 1.3505 | 8800 | 0.1639 | - | - | | 1.3812 | 9000 | 0.161 | - | - | | 1.4119 | 9200 | 0.1604 | - | - | | 1.4426 | 9400 | 0.1585 | - | - | | 1.4733 | 9600 | 0.1562 | - | - | | 1.5040 | 9800 | 0.1548 | - | - | | 1.5347 | 10000 | 0.1528 | - | - | | 1.5654 | 10200 | 0.1519 | - | - | | 1.5961 | 10400 | 0.1492 | - | - | | 1.6268 | 10600 | 0.149 | - | - | | 1.6575 | 10800 | 0.1481 | - | - | | 1.6882 | 11000 | 0.1473 | - | - | | 1.7188 | 11200 | 0.1467 | - | - | | 1.7495 | 11400 | 0.1448 | - | - | | 1.7802 | 11600 | 0.1413 | - | - | | 1.8109 | 11800 | 0.142 | - | - | | 1.8416 | 12000 | 0.1398 | - | - | | 1.8723 | 12200 | 0.1385 | - | - | | 1.9030 | 12400 | 0.1398 | - | - | | 1.9337 | 12600 | 0.1375 | - | - | | 1.9644 | 12800 | 0.1376 | - | - | | 1.9951 | 13000 | 0.1369 | - | - | | 2.0258 | 13200 | 0.1303 | - | - | | 2.0565 | 13400 | 0.1305 | - | - | | 2.0872 | 13600 | 0.1286 | - | - | | 2.1179 | 13800 | 0.1266 | - | - | | 2.1486 | 14000 | 0.1273 | - | - | | 2.1793 | 14200 | 0.1269 | - | - | | 2.2099 | 14400 | 0.1253 | - | - | | 2.2406 | 14600 | 0.1263 | - | - | | 2.2713 | 
14800 | 0.1249 | - | - | | 2.3020 | 15000 | 0.1248 | - | - | | 2.3327 | 15200 | 0.1227 | - | - | | 2.3634 | 15400 | 0.1239 | - | - | | 2.3941 | 15600 | 0.1233 | - | - | | 2.4248 | 15800 | 0.1211 | - | - | | 2.4555 | 16000 | 0.1208 | - | - | | 2.4862 | 16200 | 0.1206 | - | - | | 2.5169 | 16400 | 0.1211 | - | - | | 2.5476 | 16600 | 0.1209 | - | - | | 2.5783 | 16800 | 0.1195 | - | - | | 2.6090 | 17000 | 0.1192 | - | - | | 2.6397 | 17200 | 0.1176 | - | - | | 2.6703 | 17400 | 0.1177 | - | - | | 2.7010 | 17600 | 0.1168 | - | - | | 2.7317 | 17800 | 0.1163 | - | - | | 2.7624 | 18000 | 0.116 | - | - | | 2.7931 | 18200 | 0.1165 | - | - | | 2.8238 | 18400 | 0.1157 | - | - | | 2.8545 | 18600 | 0.1145 | - | - | | 2.8852 | 18800 | 0.1154 | - | - | | 2.9159 | 19000 | 0.1153 | - | - | | 2.9466 | 19200 | 0.1132 | - | - | | 2.9773 | 19400 | 0.1128 | - | - | | 3.0080 | 19600 | 0.1121 | - | - | | 3.0387 | 19800 | 0.1099 | - | - | | **3.0694** | **20000** | **0.1087** | **-** | **-** | | 0 | 0 | - | - | 0.5022 | | **3.0694** | **20000** | **-** | **1.1151** | **-** | | 3.1001 | 20200 | 0.1086 | - | - | | 3.1308 | 20400 | 0.108 | - | - | | 3.1614 | 20600 | 0.1087 | - | - | | 3.1921 | 20800 | 0.1084 | - | - | | 3.2228 | 21000 | 0.1072 | - | - | | 3.2535 | 21200 | 0.1087 | - | - | | 3.2842 | 21400 | 0.1067 | - | - | | 3.3149 | 21600 | 0.1073 | - | - | | 3.3456 | 21800 | 0.1067 | - | - | | 3.3763 | 22000 | 0.1045 | - | - | | 3.4070 | 22200 | 0.105 | - | - | | 3.4377 | 22400 | 0.1046 | - | - | | 3.4684 | 22600 | 0.1061 | - | - | | 3.4991 | 22800 | 0.1043 | - | - | | 3.5298 | 23000 | 0.105 | - | - | | 3.5605 | 23200 | 0.105 | - | - | | 3.5912 | 23400 | 0.1047 | - | - | | 3.6219 | 23600 | 0.1034 | - | - | | 3.6525 | 23800 | 0.1037 | - | - | | 3.6832 | 24000 | 0.1042 | - | - | | 3.7139 | 24200 | 0.1038 | - | - | | 3.7446 | 24400 | 0.1039 | - | - | | 3.7753 | 24600 | 0.1031 | - | - | | 3.8060 | 24800 | 0.1019 | - | - | | 3.8367 | 25000 | 0.1023 | - | - | | 3.8674 | 25200 | 0.1036 | - | - | | 3.8981 | 25400 | 0.1022 | - | - | | 3.9288 | 25600 | 0.102 | - | - | | 3.9595 | 25800 | 0.1022 | - | - | | 3.9902 | 26000 | 0.1017 | - | - | | 4.0209 | 26200 | 0.0997 | - | - | | 4.0516 | 26400 | 0.0992 | - | - | | 4.0823 | 26600 | 0.0993 | - | - | | 4.1130 | 26800 | 0.099 | - | - | | 4.1436 | 27000 | 0.098 | - | - | | 4.1743 | 27200 | 0.0986 | - | - | | 4.2050 | 27400 | 0.0987 | - | - | | 4.2357 | 27600 | 0.0993 | - | - | | 4.2664 | 27800 | 0.0991 | - | - | | 4.2971 | 28000 | 0.0993 | - | - | | 4.3278 | 28200 | 0.098 | - | - | | 4.3585 | 28400 | 0.0979 | - | - | | 4.3892 | 28600 | 0.0967 | - | - | | 4.4199 | 28800 | 0.0983 | - | - | | 4.4506 | 29000 | 0.0976 | - | - | | 4.4813 | 29200 | 0.0975 | - | - | | 4.5120 | 29400 | 0.0979 | - | - | | 4.5427 | 29600 | 0.0971 | - | - | | 4.5734 | 29800 | 0.0972 | - | - | | 4.6041 | 30000 | 0.0969 | - | - | | 4.6347 | 30200 | 0.0972 | - | - | | 4.6654 | 30400 | 0.0975 | - | - | | 4.6961 | 30600 | 0.0987 | - | - | | 4.7268 | 30800 | 0.0964 | - | - | | 4.7575 | 31000 | 0.0974 | - | - | | 4.7882 | 31200 | 0.0964 | - | - | | 4.8189 | 31400 | 0.0974 | - | - | | 4.8496 | 31600 | 0.0974 | - | - | | 4.8803 | 31800 | 0.0975 | - | - | | 4.9110 | 32000 | 0.097 | - | - | | 4.9417 | 32200 | 0.0973 | - | - | | 4.9724 | 32400 | 0.0973 | - | - | * The bold row denotes the saved checkpoint. 
</details> ### Framework Versions - Python: 3.11.0 - Sentence Transformers: 4.0.1 - PyLate: 1.1.7 - Transformers: 4.48.2 - PyTorch: 2.6.0+cu124 - Accelerate: 1.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084" } ``` #### PyLate ```bibtex @misc{PyLate, title={PyLate: Flexible Training and Retrieval for Late Interaction Models}, author={Chaffin, Antoine and Sourty, Raphaël}, url={https://github.com/lightonai/pylate}, year={2024} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
KotaroKinoshita/yomitoku-text-detector-dbnet-v2
KotaroKinoshita
2025-04-03T20:25:49Z
0
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2025-04-03T20:25:35Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
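For context, a minimal sketch of the save/load round trip that the `PyTorchModelHubMixin` integration provides; the `TinyNet` class below is hypothetical (the real detector class for this checkpoint lives in the yomitoku project), so this only illustrates the general pattern:

```python
import torch
from huggingface_hub import PyTorchModelHubMixin

class TinyNet(torch.nn.Module, PyTorchModelHubMixin):
    """Hypothetical model class; any nn.Module can gain save/load/push this way."""
    def __init__(self, dim: int = 8):
        super().__init__()
        self.proj = torch.nn.Linear(dim, dim)

    def forward(self, x):
        return self.proj(x)

net = TinyNet(dim=8)
net.save_pretrained("tiny-net")                 # writes config.json + model.safetensors
reloaded = TinyNet.from_pretrained("tiny-net")  # init kwargs are restored from config.json
```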
mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF
mradermacher
2025-04-03T20:25:15Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "sft", "en", "base_model:AlbertoB12/Stoicism1_Phi3.5-mini-instruct", "base_model:quantized:AlbertoB12/Stoicism1_Phi3.5-mini-instruct", "license:cc-by-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T17:28:13Z
--- base_model: AlbertoB12/Stoicism1_Phi3.5-mini-instruct language: - en library_name: transformers license: cc-by-4.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/AlbertoB12/Stoicism1_Phi3.5-mini-instruct <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF/resolve/main/Stoicism1_Phi3.5-mini-instruct.Q2_K.gguf) | Q2_K | 1.5 | | | [GGUF](https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF/resolve/main/Stoicism1_Phi3.5-mini-instruct.Q3_K_S.gguf) | Q3_K_S | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF/resolve/main/Stoicism1_Phi3.5-mini-instruct.Q3_K_M.gguf) | Q3_K_M | 2.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF/resolve/main/Stoicism1_Phi3.5-mini-instruct.Q3_K_L.gguf) | Q3_K_L | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF/resolve/main/Stoicism1_Phi3.5-mini-instruct.IQ4_XS.gguf) | IQ4_XS | 2.2 | | | [GGUF](https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF/resolve/main/Stoicism1_Phi3.5-mini-instruct.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF/resolve/main/Stoicism1_Phi3.5-mini-instruct.Q4_K_M.gguf) | Q4_K_M | 2.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF/resolve/main/Stoicism1_Phi3.5-mini-instruct.Q5_K_S.gguf) | Q5_K_S | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF/resolve/main/Stoicism1_Phi3.5-mini-instruct.Q5_K_M.gguf) | Q5_K_M | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF/resolve/main/Stoicism1_Phi3.5-mini-instruct.Q6_K.gguf) | Q6_K | 3.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF/resolve/main/Stoicism1_Phi3.5-mini-instruct.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF/resolve/main/Stoicism1_Phi3.5-mini-instruct.f16.gguf) | f16 | 7.7 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
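As a concrete companion to the usage notes above, a minimal inference sketch with `llama-cpp-python` (assumptions: the package is installed via `pip install llama-cpp-python`, and the Q4_K_M file from the table above has been downloaded locally):

```python
from llama_cpp import Llama

# Assumes the quant file was downloaded from this repo into the working directory.
llm = Llama(model_path="Stoicism1_Phi3.5-mini-instruct.Q4_K_M.gguf", n_ctx=2048)
out = llm("What do the Stoics teach about adversity?", max_tokens=128)
print(out["choices"][0]["text"])
```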
bowilleatyou/6d8a92ac-d44f-4124-b79c-951170bdcea7
bowilleatyou
2025-04-03T20:24:39Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-03T16:14:50Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/Nemo-DPO-V20-GGUF
mradermacher
2025-04-03T20:22:51Z
488
1
transformers
[ "transformers", "gguf", "en", "base_model:cloudyu/Nemo-DPO-V20", "base_model:quantized:cloudyu/Nemo-DPO-V20", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T03:02:52Z
--- base_model: cloudyu/Nemo-DPO-V20 language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/cloudyu/Nemo-DPO-V20 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Nemo-DPO-V20-GGUF/resolve/main/Nemo-DPO-V20.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Nemo-DPO-V20-GGUF/resolve/main/Nemo-DPO-V20.Q3_K_S.gguf) | Q3_K_S | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/Nemo-DPO-V20-GGUF/resolve/main/Nemo-DPO-V20.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Nemo-DPO-V20-GGUF/resolve/main/Nemo-DPO-V20.Q3_K_L.gguf) | Q3_K_L | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/Nemo-DPO-V20-GGUF/resolve/main/Nemo-DPO-V20.IQ4_XS.gguf) | IQ4_XS | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/Nemo-DPO-V20-GGUF/resolve/main/Nemo-DPO-V20.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Nemo-DPO-V20-GGUF/resolve/main/Nemo-DPO-V20.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Nemo-DPO-V20-GGUF/resolve/main/Nemo-DPO-V20.Q5_K_S.gguf) | Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/Nemo-DPO-V20-GGUF/resolve/main/Nemo-DPO-V20.Q5_K_M.gguf) | Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/Nemo-DPO-V20-GGUF/resolve/main/Nemo-DPO-V20.Q6_K.gguf) | Q6_K | 10.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Nemo-DPO-V20-GGUF/resolve/main/Nemo-DPO-V20.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
ellietang/hf_saved_lora_ls-model-14B-full-CPT-try1
ellietang
2025-04-03T20:22:41Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-Coder-14B-Instruct-bnb-4bit", "base_model:finetune:unsloth/Qwen2.5-Coder-14B-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-03-23T17:55:50Z
--- base_model: unsloth/Qwen2.5-Coder-14B-Instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** ellietang - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-Coder-14B-Instruct-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
MALIKVARUN/varunm
MALIKVARUN
2025-04-03T20:20:49Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-04-03T20:20:30Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 268.29 +/- 14.35 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the Files & versions tab):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the filename below is an assumption about this repo's contents.
checkpoint = load_from_hub(repo_id="MALIKVARUN/varunm", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
priyanshu745/distilbert
priyanshu745
2025-04-03T20:20:48Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-03T20:19:14Z
--- license: apache-2.0 pipeline_tag: text-classification library_name: transformers ---
kreasof-ai/whisper-small-be2en
kreasof-ai
2025-04-03T20:18:22Z
41
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-03-22T10:54:52Z
--- library_name: transformers license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer metrics: - bleu - wer model-index: - name: whisper-small-be2en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-be2en This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0323 - Bleu: 47.49 - Chrf: 88.36 - Wer: 38.0952 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Chrf | Wer | |:-------------:|:-----:|:-----:|:---------------:|:-----:|:-----:|:-------:| | 0.0326 | 1.0 | 6205 | 0.0360 | 41.72 | 86.59 | 43.6696 | | 0.0229 | 2.0 | 12410 | 0.0312 | 46.92 | 88.33 | 38.6426 | | 0.0318 | 3.0 | 18615 | 0.0323 | 47.49 | 88.36 | 38.0952 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.4.0 - Tokenizers 0.21.0
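Since the card provides no usage snippet, here is a minimal inference sketch with the 🤗 Transformers pipeline (an illustrative sketch, not from the original card; `audio.wav` is a placeholder for your own recording, and decoding audio files requires ffmpeg):

```python
from transformers import pipeline

# Loads the fine-tuned Whisper checkpoint and runs it on a local file.
asr = pipeline("automatic-speech-recognition", model="kreasof-ai/whisper-small-be2en")
print(asr("audio.wav")["text"])  # "audio.wav" is a placeholder path
```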
kdvtr/plastilin_LoRA
kdvtr
2025-04-03T20:16:29Z
0
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-04-03T20:15:37Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: illustration in PLASTILIN style widget: [] tags: - text-to-image - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - kdvtr/plastilin_LoRA <Gallery /> ## Model description These are kdvtr/plastilin_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use illustration in PLASTILIN style to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](kdvtr/plastilin_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
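The card's "How to use" snippet is still a TODO; a minimal sketch for running these LoRA weights with diffusers (assumes a CUDA GPU; the trigger phrase and the fp16-fix VAE come from this card, the sample subject is arbitrary):

```python
import torch
from diffusers import AutoPipelineForText2Image, AutoencoderKL

# The card notes madebyollin/sdxl-vae-fp16-fix was the VAE used for training.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("kdvtr/plastilin_LoRA")

# Trigger phrase from the card plus an arbitrary sample subject.
image = pipe("illustration in PLASTILIN style, a fox in a forest").images[0]
image.save("plastilin_sample.png")
```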
Katrun/Frankenthaler_style_sd2_LoRA
Katrun
2025-04-03T20:16:13Z
0
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-04-03T20:16:07Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: Helen Frankenthaler widget: [] tags: - text-to-image - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - Katrun/Frankenthaler_style_sd2_LoRA <Gallery /> ## Model description These are Katrun/Frankenthaler_style_sd2_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use Helen Frankenthaler to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](Katrun/Frankenthaler_style_sd2_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
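This card's "How to use" snippet is likewise a TODO; the same diffusers pattern applies (a sketch assuming a CUDA GPU; the trigger words and VAE come from this card):

```python
import torch
from diffusers import AutoPipelineForText2Image, AutoencoderKL

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Katrun/Frankenthaler_style_sd2_LoRA")
image = pipe("an abstract color-field landscape, Helen Frankenthaler").images[0]
image.save("frankenthaler_sample.png")
```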
mradermacher/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b-GGUF
mradermacher
2025-04-03T20:16:05Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "en", "dataset:shisa-ai/shisa-v2-best-of-n-athenev2-tulu70b-llama33-only-no-sysprompt", "dataset:shisa-ai/shisa-v2-roleplaying-sft", "dataset:shisa-ai/translation_expanded_master_set_filtered", "dataset:shisa-ai/rewild-set", "dataset:shisa-ai/magpie-ultra-set", "dataset:shisa-ai/magpie-advanced-questions-set", "dataset:shisa-ai/japan-magpie-set", "base_model:shisa-ai/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b", "base_model:quantized:shisa-ai/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b", "license:llama3.1", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T19:14:42Z
--- base_model: shisa-ai/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b datasets: - shisa-ai/shisa-v2-best-of-n-athenev2-tulu70b-llama33-only-no-sysprompt - shisa-ai/shisa-v2-roleplaying-sft - shisa-ai/translation_expanded_master_set_filtered - shisa-ai/rewild-set - shisa-ai/magpie-ultra-set - shisa-ai/magpie-advanced-questions-set - shisa-ai/japan-magpie-set language: - en library_name: transformers license: llama3.1 quantized_by: mradermacher tags: - generated_from_trainer --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/shisa-ai/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | 
[GGUF](https://huggingface.co/mradermacher/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-138-shisav2.gbs128.1.6e5-shisa-v2-llama-3.1-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
FIERRO01/SOLEDAD
FIERRO01
2025-04-03T20:13:35Z
0
0
null
[ "license:other", "region:us" ]
null
2025-04-03T19:21:27Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
jmalejandrob79/cndnlhr15
jmalejandrob79
2025-04-03T20:13:35Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-04-03T04:00:07Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: cndnlhr15 --- # Cndnlhr15 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `cndnlhr15` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "cndnlhr15", "lora_weights": "https://huggingface.co/jmalejandrob79/cndnlhr15/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('jmalejandrob79/cndnlhr15', weight_name='lora.safetensors') image = pipeline('cndnlhr15').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 4000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/jmalejandrob79/cndnlhr15/discussions) to add images that show off what you’ve made with this LoRA.
genki10/BERT_AugV8_k3_task1_organization_sp020_lw030_fold2
genki10
2025-04-03T20:12:08Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-03-25T07:03:06Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: BERT_AugV8_k3_task1_organization_sp020_lw030_fold2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERT_AugV8_k3_task1_organization_sp020_lw030_fold2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0061 - Qwk: 0.2594 - Mse: 1.0060 - Rmse: 1.0030 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | No log | 1.0 | 3 | 8.1999 | 0.0005 | 8.2001 | 2.8636 | | No log | 2.0 | 6 | 5.3717 | 0.0366 | 5.3719 | 2.3177 | | No log | 3.0 | 9 | 3.4922 | 0.0 | 3.4925 | 1.8688 | | No log | 4.0 | 12 | 2.4710 | 0.0139 | 2.4714 | 1.5721 | | No log | 5.0 | 15 | 1.6968 | 0.0422 | 1.6973 | 1.3028 | | No log | 6.0 | 18 | 1.1874 | 0.0 | 1.1879 | 1.0899 | | No log | 7.0 | 21 | 0.9127 | 0.0069 | 0.9131 | 0.9556 | | No log | 8.0 | 24 | 1.0118 | 0.0174 | 1.0122 | 1.0061 | | No log | 9.0 | 27 | 0.7545 | 0.3841 | 0.7547 | 0.8687 | | No log | 10.0 | 30 | 1.3825 | 0.2252 | 1.3828 | 1.1759 | | No log | 11.0 | 33 | 0.8139 | 0.4517 | 0.8140 | 0.9022 | | No log | 12.0 | 36 | 0.7660 | 0.3697 | 0.7662 | 0.8753 | | No log | 13.0 | 39 | 0.7768 | 0.3524 | 0.7769 | 0.8814 | | No log | 14.0 | 42 | 1.0432 | 0.2662 | 1.0432 | 1.0214 | | No log | 15.0 | 45 | 1.4484 | 0.2263 | 1.4481 | 1.2034 | | No log | 16.0 | 48 | 0.5584 | 0.5392 | 0.5581 | 0.7471 | | No log | 17.0 | 51 | 0.7575 | 0.4882 | 0.7573 | 0.8702 | | No log | 18.0 | 54 | 2.2370 | 0.1477 | 2.2363 | 1.4954 | | No log | 19.0 | 57 | 0.5319 | 0.5722 | 0.5317 | 0.7291 | | No log | 20.0 | 60 | 1.1213 | 0.3593 | 1.1209 | 1.0587 | | No log | 21.0 | 63 | 0.8766 | 0.3950 | 0.8762 | 0.9361 | | No log | 22.0 | 66 | 1.3210 | 0.1808 | 1.3204 | 1.1491 | | No log | 23.0 | 69 | 1.0514 | 0.2160 | 1.0508 | 1.0251 | | No log | 24.0 | 72 | 0.8912 | 0.3101 | 0.8907 | 0.9438 | | No log | 25.0 | 75 | 1.2625 | 0.1467 | 1.2621 | 1.1235 | | No log | 26.0 | 78 | 1.0112 | 0.2495 | 1.0109 | 1.0054 | | No log | 27.0 | 81 | 0.9639 | 0.3227 | 0.9637 | 0.9817 | | No log | 28.0 | 84 | 0.8281 | 0.4141 | 0.8278 | 0.9098 | | No log | 29.0 | 87 | 1.5125 | 0.2320 | 1.5123 | 1.2297 | | No log | 30.0 | 90 | 0.6534 | 0.5310 | 0.6531 | 0.8081 | | No log | 31.0 | 93 | 1.3984 | 0.2492 | 1.3983 | 1.1825 | | No log | 32.0 | 96 | 0.6678 | 0.5155 | 0.6675 | 0.8170 | | No log | 33.0 | 99 | 0.9190 | 0.3503 | 0.9188 | 0.9585 | | No log | 34.0 | 102 | 1.0061 | 0.2594 | 1.0060 | 1.0030 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF
mradermacher
2025-04-03T20:10:22Z
0
0
transformers
[ "transformers", "gguf", "agent", "coding", "en", "base_model:JackCloudman/openhands-lm-32b-v0.1-jackterated", "base_model:quantized:JackCloudman/openhands-lm-32b-v0.1-jackterated", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-04-03T13:48:45Z
--- base_model: JackCloudman/openhands-lm-32b-v0.1-jackterated language: - en library_name: transformers license: mit quantized_by: mradermacher tags: - agent - coding --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/JackCloudman/openhands-lm-32b-v0.1-jackterated <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better | | 
[GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
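For readers who want a concrete starting point beyond TheBloke's READMEs, here is a minimal, untested sketch using `llama-cpp-python`; the file name matches the i1-Q4_K_M entry in the table above and is assumed to have been downloaded locally.

```python
# Untested sketch: local inference on one of the imatrix quants above.
# Assumes the i1-Q4_K_M file has already been downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="openhands-lm-32b-v0.1-jackterated.i1-Q4_K_M.gguf",
    n_ctx=4096,       # context window; raise if your RAM/VRAM allows
    n_gpu_layers=-1,  # offload all layers to GPU; set to 0 for pure CPU
)
out = llm("Write a Python function that reverses a string.", max_tokens=256)
print(out["choices"][0]["text"])
```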
ahmed-masry/lilt-mlm-detach-23438
ahmed-masry
2025-04-03T20:09:36Z
0
0
transformers
[ "transformers", "safetensors", "lilt", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
2025-04-03T20:02:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
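The card above is an unfilled template, so the snippet below is only a guess at basic usage of a LiLT checkpoint exposed for feature extraction. It assumes the repo ships tokenizer files; LiLT is layout-aware, so one bounding box per token is required (all-zero placeholders here).

```python
# Untested sketch: feature extraction with the LiLT checkpoint.
# LiLT expects one normalized (0-1000) bounding box per token; zeros are
# placeholders -- real documents should supply OCR-derived boxes.
import torch
from transformers import AutoTokenizer, AutoModel

repo = "ahmed-masry/lilt-mlm-detach-23438"
tokenizer = AutoTokenizer.from_pretrained(repo)  # assumes tokenizer files exist
model = AutoModel.from_pretrained(repo)

enc = tokenizer("hello world", return_tensors="pt")
bbox = torch.zeros(*enc["input_ids"].shape, 4, dtype=torch.long)
outputs = model(**enc, bbox=bbox)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```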
bowilleatyou/bf9bb93f-890d-4008-ace1-645b11a104fe
bowilleatyou
2025-04-03T20:08:41Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-03T15:18:22Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
VarvaraG/pokemon_pic_LoRA
VarvaraG
2025-04-03T20:08:16Z
0
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-04-03T20:08:10Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: 'pokemon picture, ' widget: [] tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - VarvaraG/pokemon_pic_LoRA <Gallery /> ## Model description These are VarvaraG/pokemon_pic_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `pokemon picture, ` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/VarvaraG/pokemon_pic_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
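The "How to use" section above is still a TODO. Until the author fills it in, here is a minimal, untested sketch of the usual `diffusers` recipe for SDXL LoRA adapters, using the trigger phrase from the card (the prompt subject is invented):

```python
# Untested sketch of the standard diffusers pattern for SDXL LoRA weights.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("VarvaraG/pokemon_pic_LoRA")

# The card lists "pokemon picture, " (trailing comma included) as the trigger.
image = pipe("pokemon picture, a small grass-type creature in a meadow").images[0]
image.save("pokemon.png")
```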
CatkinChen/babyai-classical-ppo-experiments-2025-04-03_20-00-28
CatkinChen
2025-04-03T20:06:56Z
0
0
peft
[ "peft", "pytorch", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:adapter:meta-llama/Llama-3.2-3B-Instruct", "region:us" ]
null
2025-04-03T20:00:33Z
--- base_model: meta-llama/Llama-3.2-3B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
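This record is a PEFT adapter whose base model is named in the card metadata (meta-llama/Llama-3.2-3B-Instruct). Since the card's usage section is empty, here is an untested sketch of the standard way to attach such an adapter; note the base model is gated on the Hub.

```python
# Untested sketch: load the base model, then attach this PEFT adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.2-3B-Instruct"  # gated; requires Hub access
adapter_id = "CatkinChen/babyai-classical-ppo-experiments-2025-04-03_20-00-28"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()
```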
gbelewade/test-mt5-base-eng-yor-stem
gbelewade
2025-04-03T20:03:58Z
0
0
transformers
[ "transformers", "safetensors", "mt5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-04-03T20:01:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
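The repo name suggests an mT5-base checkpoint fine-tuned for English-to-Yoruba translation of STEM text, though the card itself documents nothing. An untested sketch of seq2seq inference follows; whether the checkpoint expects a task prefix is unknown, so none is used here.

```python
# Untested sketch: English-to-Yoruba translation with the fine-tuned mT5.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "gbelewade/test-mt5-base-eng-yor-stem"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

text = "Photosynthesis converts light energy into chemical energy."
inputs = tokenizer(text, return_tensors="pt")
ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```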
Komeil30/Komil
Komeil30
2025-04-03T20:01:34Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-03T20:01:34Z
--- license: apache-2.0 ---
fbaldassarri/openlm-research_open_llama_7b_v2-autogptq-int8-gs128-sym
fbaldassarri
2025-04-03T19:52:43Z
0
0
null
[ "safetensors", "llama", "pytorch", "causal-lm", "OpenLLaMA", "autoround", "auto-round", "intel-autoround", "gptq", "auto-gptq", "autogptq", "woq", "intel", "openlm-research", "text-generation", "dataset:tiiuae/falcon-refinedweb", "dataset:bigcode/starcoderdata", "dataset:togethercomputer/RedPajama-Data-1T", "base_model:openlm-research/open_llama_7b_v2", "base_model:quantized:openlm-research/open_llama_7b_v2", "license:apache-2.0", "8-bit", "region:us" ]
text-generation
2025-04-03T19:50:53Z
--- tags: - pytorch - causal-lm - OpenLLaMA - autoround - auto-round - intel-autoround - gptq - auto-gptq - autogptq - woq - intel - openlm-research license: apache-2.0 datasets: - tiiuae/falcon-refinedweb - bigcode/starcoderdata - togethercomputer/RedPajama-Data-1T model_name: OpenLLaMA 7B v2 base_model: - openlm-research/open_llama_7b_v2 inference: false model_creator: openlm-research pipeline_tag: text-generation prompt_template: '{prompt} ' quantized_by: fbaldassarri --- ## Model Information Quantized version of [openlm-research/open_llama_7b_v2](https://huggingface.co/openlm-research/open_llama_7b_v2) using torch.float32 for quantization tuning. - 8 bits (INT8) - group size = 128 - Symmetric quantization - Method: AutoGPTQ Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.4.6 Note: this INT8 version of open_llama_7b_v2 has been quantized to run inference through CPU. ## Replication Recipe ### Step 1 Install Requirements I suggest installing the requirements into a dedicated Python virtualenv or conda environment. ``` wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.6.tar.gz tar -xvzf v0.4.6.tar.gz cd auto-round-0.4.6 pip install -r requirements-cpu.txt --upgrade ``` ### Step 2 Build Intel AutoRound wheel from sources ``` pip install -vvv --no-build-isolation -e .[cpu] ``` ### Step 3 Script for Quantization ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "openlm-research/open_llama_7b_v2" model = AutoModelForCausalLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) from auto_round import AutoRound bits, group_size, sym, device, amp = 8, 128, True, 'cpu', False autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp) autoround.quantize() output_dir = "./AutoRound/openlm-research_open_llama_7b_v2-autogptq-int8-gs128-sym" autoround.save_quantized(output_dir, format='auto_gptq', inplace=True) ``` ## License [Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/) ## Disclaimer This quantized model comes with no warranty. It has been developed only for research purposes.
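The card states this INT8 GPTQ export targets CPU inference but gives no loading example. The sketch below shows the usual transformers path for GPTQ-format repos; it assumes optimum and auto-gptq are installed, and CPU kernel support for GPTQ varies by auto-gptq build, so treat this as a starting point rather than a verified recipe.

```python
# Untested sketch: loading the GPTQ-format checkpoint for CPU inference.
# transformers reads the quantization_config stored in the repo; requires
# optimum + auto-gptq, whose CPU support varies by version.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "fbaldassarri/openlm-research_open_llama_7b_v2-autogptq-int8-gs128-sym"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="cpu")

inputs = tokenizer("The capital of France is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```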