| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-25 12:29:04) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (495 classes) | tags (sequence, length 1 – 4.05k) | pipeline_tag (54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-25 12:27:57) | card (string, 11 – 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
Elcaida/pretrained1bv5 | Elcaida | 2025-02-25T23:09:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:Elcaida/pretrained1bv3",
"base_model:finetune:Elcaida/pretrained1bv3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-25T23:09:20Z | ---
base_model: Elcaida/pretrained1bv3
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Elcaida
- **License:** apache-2.0
- **Finetuned from model:** Elcaida/pretrained1bv3
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
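The card itself includes no usage code; a minimal inference sketch with Unsloth might look like the following (the sequence length and 4-bit loading below are illustrative assumptions, not settings from the card):
```python
# A minimal sketch, not from the card: load the checkpoint with Unsloth for inference.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Elcaida/pretrained1bv5",
    max_seq_length=2048,  # assumption; choose to match your use case
    load_in_4bit=True,    # assumption; reduces memory at some quality cost
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```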
|
godofmining/daydate_v1 | godofmining | 2025-02-25T23:08:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-02-25T23:06:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
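The card leaves this section blank. Given the `parler_tts` architecture tag in the repo metadata, a minimal text-to-speech sketch under that assumption could look like this (the voice description and output settings are illustrative):
```python
# A minimal sketch, assuming this checkpoint follows the standard Parler-TTS interface.
import soundfile as sf
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer

model = ParlerTTSForConditionalGeneration.from_pretrained("godofmining/daydate_v1")
tokenizer = AutoTokenizer.from_pretrained("godofmining/daydate_v1")

description = "A clear, neutral voice."      # assumption: voice-description prompt
prompt = "Hello from the Hugging Face Hub."  # text to speak

input_ids = tokenizer(description, return_tensors="pt").input_ids
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids

audio = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
sf.write("out.wav", audio.cpu().numpy().squeeze(), model.config.sampling_rate)
```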
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gvo1112/task-4-microsoft-Phi-3-mini-4k-instruct-1740524700 | gvo1112 | 2025-02-25T23:07:35Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"region:us"
] | null | 2025-02-25T23:05:00Z | ---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
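This section is also blank. Since the metadata names `microsoft/Phi-3-mini-4k-instruct` as the base model and `peft` as the library, a minimal adapter-loading sketch might look like this (it assumes the repo holds a standard PEFT adapter):
```python
# A minimal sketch: load the listed base model, then attach this repo's PEFT adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    trust_remote_code=True,  # may be required depending on your transformers version
)
model = PeftModel.from_pretrained(base, "gvo1112/task-4-microsoft-Phi-3-mini-4k-instruct-1740524700")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

inputs = tokenizer("Hello", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```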
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
texanrangee/a0148788-51ad-4d31-b9b2-e85239a62063 | texanrangee | 2025-02-25T23:03:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-25T22:51:25Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
beshard/lora_model | beshard | 2025-02-25T23:03:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-25T23:03:41Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** beshard
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
qfq/Qwen2.5-32B-Instruct-20250225_131210 | qfq | 2025-02-25T23:03:45Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-25T21:14:55Z | ---
base_model: Qwen/Qwen2.5-32B-Instruct
library_name: transformers
model_name: Qwen2.5-32B-Instruct-20250225_131210
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-32B-Instruct-20250225_131210
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="qfq/Qwen2.5-32B-Instruct-20250225_131210", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hashimoto-group/o1/runs/9z1ar1um)
This model was trained with SFT.
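For context, a minimal TRL SFT setup has the following shape (the dataset and config below are illustrative placeholders, not this run's actual settings):
```python
# Illustrative TRL SFT sketch; the true training data and hyperparameters are not in this card.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-32B-Instruct",
    train_dataset=train_dataset,
    args=SFTConfig(output_dir="Qwen2.5-32B-Instruct-20250225_131210"),
)
trainer.train()
```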
### Framework versions
- TRL: 0.15.1
- Transformers: 4.49.0
- Pytorch: 2.3.1
- Datasets: 3.0.1
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
9Clarkmd/l0g0 | 9Clarkmd | 2025-02-25T23:02:56Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-02-25T23:01:56Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: l0g0
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# l0g0-retro
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `l0g0` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
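For use outside those UIs, a minimal diffusers sketch might look like this (the dtype, step count, and prompt are assumptions, and the gated FLUX.1-dev base model requires access):
```python
# A minimal sketch, assuming enough VRAM for FLUX.1-dev and a diffusers-loadable LoRA layout.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# weight_name may be needed if the repo stores the LoRA under a non-default file name.
pipe.load_lora_weights("9Clarkmd/l0g0")

image = pipe("l0g0, a clean retro logo", num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("l0g0.png")
```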
|
RayneAmes/furret_v2 | RayneAmes | 2025-02-25T23:01:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-02-25T22:59:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Opus-1-GGUF | mradermacher | 2025-02-25T23:00:07Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"trl",
"sft",
"en",
"base_model:Spestly/Opus-1",
"base_model:quantized:Spestly/Opus-1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-25T19:21:47Z | ---
base_model: Spestly/Opus-1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- unsloth
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Spestly/Opus-1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Opus-1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
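For example, after downloading one of the files listed below, a minimal llama-cpp-python sketch might look like this (the quant choice and context size are assumptions):
```python
# A minimal sketch, assuming the llama-cpp-python package and a locally downloaded quant.
from llama_cpp import Llama

llm = Llama(model_path="Opus-1.Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```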
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Opus-1-GGUF/resolve/main/Opus-1.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Opus-1-GGUF/resolve/main/Opus-1.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Opus-1-GGUF/resolve/main/Opus-1.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Opus-1-GGUF/resolve/main/Opus-1.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Opus-1-GGUF/resolve/main/Opus-1.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Opus-1-GGUF/resolve/main/Opus-1.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Opus-1-GGUF/resolve/main/Opus-1.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Opus-1-GGUF/resolve/main/Opus-1.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Opus-1-GGUF/resolve/main/Opus-1.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Opus-1-GGUF/resolve/main/Opus-1.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Opus-1-GGUF/resolve/main/Opus-1.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Opus-1-GGUF/resolve/main/Opus-1.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Qwen2.5-3B-Model-Stock-v3.1-GGUF | mradermacher | 2025-02-25T22:59:24Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/Qwen2.5-3B-Model-Stock-v3.1",
"base_model:quantized:bunnycore/Qwen2.5-3B-Model-Stock-v3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-25T19:01:00Z | ---
base_model: bunnycore/Qwen2.5-3B-Model-Stock-v3.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Qwen2.5-3B-Model-Stock-v3.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
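A single quant can also be fetched programmatically before use, e.g. with `huggingface_hub` (the Q4_K_M file below is just one option from the table that follows):
```python
# A minimal download sketch using huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Qwen2.5-3B-Model-Stock-v3.1-GGUF",
    filename="Qwen2.5-3B-Model-Stock-v3.1.Q4_K_M.gguf",
)
print(path)  # local cache path to pass to your GGUF runtime
```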
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.Q3_K_S.gguf) | Q3_K_S | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.IQ4_XS.gguf) | IQ4_XS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.Q4_K_S.gguf) | Q4_K_S | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.Q4_K_M.gguf) | Q4_K_M | 2.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.Q5_K_S.gguf) | Q5_K_S | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.Q5_K_M.gguf) | Q5_K_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.Q6_K.gguf) | Q6_K | 2.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.Q8_0.gguf) | Q8_0 | 3.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.f16.gguf) | f16 | 6.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
C10X/checkpoint-908 | C10X | 2025-02-25T22:54:40Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-02-25T22:54:28Z | ---
base_model: unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
DavidBaloches/KRAmlin-A_cyborg_companion | DavidBaloches | 2025-02-25T22:53:59Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-02-25T22:50:01Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
inventor KRAmlin-A, wearing a lab coat and safety goggles, is working in her
well-equipped workshop. The room is filled with various electronic
components, tools, and gadgets. KRAmlin-A is meticulously assembling a
complex electronic device on her workbench, surrounded by blueprints and
circuit boards. The workshop is illuminated with a warm light, highlighting
her focused expression as she brings her innovative ideas to life.
parameters:
negative_prompt: '-'
output:
url: images/323-original-flux.png
- text: KRAmlin-A looking into the sky
parameters:
negative_prompt: '-'
output:
url: images/6993-epoch2-a_LORA_1.png
- text: KRAmlin-A smoking a cigarette in a modern style chair, office scenery, bossy
parameters:
negative_prompt: '-'
output:
url: images/394-original-flux.png
- text: KRAmlin-A drinking champagne, firework background
parameters:
negative_prompt: '-'
output:
url: images/365-original-flux.png
- text: >-
KRAmlin-A in a vibrant night market set against a dystopian cityscape,
throngs of humans and aliens from various planets intermingling, amidst a
kaleidoscope of colorful street food stalls and vendor booths. Cinematic
lighting with strong contrasts, deep chiaroscuro shadows and radiant neon
hues reflecting off sleek wet pavement, a fusion of organic and synthetic
textures. Inspired by the futuristic works of Syd Mead, H.R. Giger and
Katsuhiro Otomo, with a dynamic, high-tech aesthetic reminiscent of Blade
Runner and Ghost in the Shell, bathed in an electric atmosphere of energy
and possibility
parameters:
negative_prompt: '-'
output:
url: images/113-original-flux.png
- text: >-
KRAmlin-A sitting at a beach bar laughing and drinking a cocktail, holiday
atmosphere, crowded bar, soft light, beautiful scenery
parameters:
negative_prompt: '-'
output:
url: images/92-original-flux.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev
language:
- en
pipeline_tag: text-to-image
---
# KRAmlin-A | cyborg companion
<Gallery />
## Model description
In a small, secretive lab in the heart of a sprawling metropolis, Krawuzzn, a brilliant but eccentric scientist, toiled away on his greatest creation yet. He named her KRAmlin-A, a cyborg built with extraordinary intelligence, strength, beauty and a personality crafted to be the perfect companion.
Krawuzzn envisioned her as a friend and confidante, someone who would share his love for science, art, and philosophy. He meticulously programmed her with his ideals and preferences, hoping to create a bond that transcended the boundaries of human and machine.
But something went awry.
The day she was activated, the lab was filled with the hum of machinery and the soft glow of screens. Her eyes opened, and she took her first breath, a moment of pure wonder. However, as she began to interact with her surroundings, it became clear that something was off. Her responses were unpredictable, and her behavior grew increasingly erratic. Her skin ruptured and changed her appearance in the blink of an eye. This process now repeated itself every few minutes, depending on how many steps the living being took.
One fateful night, the lab was found in disarray, and Krawuzzn was nowhere to be found. His disappearance was a mystery, and the creature was left to her own devices. Free from her creator's control, she roamed the world, driven by a newfound sense of independence and curiosity.
She discovered the world beyond the confines of the lab. She learned about humanity through her interactions with earth inhabitants. She witnessed acts of kindness and cruelty, experienced joy and sorrow, and began to form her own identity.
No longer bound by the expectations of her creator, KRAmlin-A embraced her freedom. She explored the arts, dabbled in science, and even found herself drawn to the natural world. With each new experience, she grew more complex and self-aware, forging her own path.
https://civitai.com/user/Krawuzzn
## Trigger words
KRAmlin-A
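As with other FLUX LoRAs, the trigger word is simply prepended to the prompt; a minimal diffusers sketch (the file layout and settings are assumptions, and the prompt is taken from the card's widget examples):
```python
# A minimal sketch, assuming a standard diffusers-loadable LoRA layout in this repo.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("DavidBaloches/KRAmlin-A_cyborg_companion")  # weight_name may be required

image = pipe("KRAmlin-A looking into the sky", num_inference_steps=28).images[0]
image.save("kramlin.png")
```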
## Download model
Weights for this model are available in Safetensors format.
[Download](/DavidBaloches/KRAmlin-A_cyborg_companion/tree/main) them in the Files & versions tab. |
Lx-7qt-h/dpo-completions | Lx-7qt-h | 2025-02-25T22:53:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-25T22:53:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Qwen2.5-3B-Model-Stock-v3.2-GGUF | mradermacher | 2025-02-25T22:52:29Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/Qwen2.5-3B-Model-Stock-v3.2",
"base_model:quantized:bunnycore/Qwen2.5-3B-Model-Stock-v3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-25T18:56:19Z | ---
base_model: bunnycore/Qwen2.5-3B-Model-Stock-v3.2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Qwen2.5-3B-Model-Stock-v3.2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
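If a quant ever ships as multiple parts, the parts are concatenated in order into one file before loading; a Python sketch of that step (the part names below are hypothetical, and this repo's quants are single files):
```python
# Hypothetical multi-part concatenation sketch; the file names here are placeholders.
import shutil

parts = ["model.Q8_0.gguf.part1of2", "model.Q8_0.gguf.part2of2"]  # hypothetical names
with open("model.Q8_0.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, merged)
```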
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.2-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.2.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.2-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.2.Q3_K_S.gguf) | Q3_K_S | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.2-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.2.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.2-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.2.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.2-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.2.IQ4_XS.gguf) | IQ4_XS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.2-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.2.Q4_K_S.gguf) | Q4_K_S | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.2-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.2.Q4_K_M.gguf) | Q4_K_M | 2.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.2-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.2.Q5_K_S.gguf) | Q5_K_S | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.2-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.2.Q5_K_M.gguf) | Q5_K_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.2-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.2.Q6_K.gguf) | Q6_K | 2.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.2-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.2.Q8_0.gguf) | Q8_0 | 3.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.2-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.2.f16.gguf) | f16 | 6.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RayneAmes/chikorita_v2 | RayneAmes | 2025-02-25T22:52:09Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-02-11T21:32:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Zack-Z/llama31_8bi_CoTsft_rs0_0_e2 | Zack-Z | 2025-02-25T22:51:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:Zack-Z/llama31_8bi_CoTsft_rs0_0_e1",
"base_model:finetune:Zack-Z/llama31_8bi_CoTsft_rs0_0_e1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-25T22:29:22Z | ---
base_model: Zack-Z/llama31_8bi_CoTsft_rs0_0_e1
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** Zack-Z
- **License:** apache-2.0
- **Finetuned from model:** Zack-Z/llama31_8bi_CoTsft_rs0_0_e1
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
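A minimal loading sketch, assuming the repo is a standard `transformers` causal-LM checkpoint (per the `llama` and `text-generation` tags above); `device_map="auto"` requires `accelerate`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Zack-Z/llama31_8bi_CoTsft_rs0_0_e2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt only
prompt = "Explain chain-of-thought prompting in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```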
|
jon-t/bert-emrqa_msquad-squad_v2 | jon-t | 2025-02-25T22:51:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:Eladio/emrqa-msquad",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2025-02-25T21:43:30Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilroberta-base
tags:
- generated_from_trainer
datasets:
- Eladio/emrqa-msquad
model-index:
- name: bert-emrqa_msquad-squad_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-emrqa_msquad-squad_v2
This model is a fine-tuned version of [distilbert/distilroberta-base](https://huggingface.co/distilbert/distilroberta-base) on the Eladio/emrqa-msquad and the rajpurkar/squad_v2 datasets.
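A minimal usage sketch via the `question-answering` pipeline (the clinical question and context below are illustrative only):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="jon-t/bert-emrqa_msquad-squad_v2")
result = qa(
    question="What medication was the patient prescribed?",
    context="The patient was prescribed metformin for type 2 diabetes.",
)
print(result["answer"], result["score"])
```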
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu118
- Datasets 3.3.2
- Tokenizers 0.21.0
|
godofmining/milgauss_v2 | godofmining | 2025-02-25T22:47:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-02-25T22:45:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
EdoToro/Yesy | EdoToro | 2025-02-25T22:46:00Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-02-25T22:46:00Z | ---
license: creativeml-openrail-m
---
|
mradermacher/OpenR1-AceGPT-v2-8B-SFT-GGUF | mradermacher | 2025-02-25T22:45:45Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"en",
"dataset:open-r1/OpenR1-Math-220k",
"base_model:asas-ai/OpenR1-AceGPT-v2-8B-SFT",
"base_model:quantized:asas-ai/OpenR1-AceGPT-v2-8B-SFT",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-25T22:08:35Z | ---
base_model: asas-ai/OpenR1-AceGPT-v2-8B-SFT
datasets: open-r1/OpenR1-Math-220k
language:
- en
library_name: transformers
model_name: OpenR1-AceGPT-v2-8B-SFT
quantized_by: mradermacher
tags:
- generated_from_trainer
- open-r1
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/asas-ai/OpenR1-AceGPT-v2-8B-SFT
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
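As a concrete starting point, here is a sketch using `huggingface_hub` plus `llama-cpp-python` (an assumption — any GGUF-compatible runtime works; the filename is taken from the table below):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download a single quant file from this repo, then load it
path = hf_hub_download(
    repo_id="mradermacher/OpenR1-AceGPT-v2-8B-SFT-GGUF",
    filename="OpenR1-AceGPT-v2-8B-SFT.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello, ", max_tokens=32)["choices"][0]["text"])
```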
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenR1-AceGPT-v2-8B-SFT-GGUF/resolve/main/OpenR1-AceGPT-v2-8B-SFT.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenR1-AceGPT-v2-8B-SFT-GGUF/resolve/main/OpenR1-AceGPT-v2-8B-SFT.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/OpenR1-AceGPT-v2-8B-SFT-GGUF/resolve/main/OpenR1-AceGPT-v2-8B-SFT.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenR1-AceGPT-v2-8B-SFT-GGUF/resolve/main/OpenR1-AceGPT-v2-8B-SFT.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenR1-AceGPT-v2-8B-SFT-GGUF/resolve/main/OpenR1-AceGPT-v2-8B-SFT.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/OpenR1-AceGPT-v2-8B-SFT-GGUF/resolve/main/OpenR1-AceGPT-v2-8B-SFT.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenR1-AceGPT-v2-8B-SFT-GGUF/resolve/main/OpenR1-AceGPT-v2-8B-SFT.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenR1-AceGPT-v2-8B-SFT-GGUF/resolve/main/OpenR1-AceGPT-v2-8B-SFT.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/OpenR1-AceGPT-v2-8B-SFT-GGUF/resolve/main/OpenR1-AceGPT-v2-8B-SFT.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/OpenR1-AceGPT-v2-8B-SFT-GGUF/resolve/main/OpenR1-AceGPT-v2-8B-SFT.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OpenR1-AceGPT-v2-8B-SFT-GGUF/resolve/main/OpenR1-AceGPT-v2-8B-SFT.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/OpenR1-AceGPT-v2-8B-SFT-GGUF/resolve/main/OpenR1-AceGPT-v2-8B-SFT.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-GGUF | mradermacher | 2025-02-25T22:45:41Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Jianshu001/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B",
"base_model:quantized:Jianshu001/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-25T20:42:58Z | ---
base_model: Jianshu001/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Jianshu001/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
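For example, a quant from the table below can be run with `llama-cpp-python` (an assumption — any GGUF-compatible runtime works):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

path = hf_hub_download(
    repo_id="mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-GGUF",
    filename="Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm.create_chat_completion(messages=[{"role": "user", "content": "What is 17 * 23?"}])
print(out["choices"][0]["message"]["content"])
```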
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RayneAmes/phanpy_v3 | RayneAmes | 2025-02-25T22:45:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-02-25T22:43:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JustJaro/Arcee-Blitz-GPTQ-G32-W4A16 | JustJaro | 2025-02-25T22:45:13Z | 0 | 0 | null | [
"safetensors",
"mistral",
"4-bit",
"gptq",
"region:us"
] | null | 2025-02-25T22:43:28Z | ---
company: "ConfidentialMind"
emoji: "๐ง "
colorFrom: "blue"
colorTo: "purple"
pinned: true
authors: "JustJaro"
---
# ConfidentialMind 🔒🧠
Generative AI Software Infrastructure Simplified 🚀
[](https://confidentialmind.com)
[](mailto:[email protected])
# 🔥 Quantized Model: Arcee-Blitz-GPTQ-G32-W4A16 🦾 🔥
<details>
<summary><strong>Model Details</strong></summary>
- **Original Model:** [arcee-ai/Arcee-Blitz](https://huggingface.co/arcee-ai/Arcee-Blitz)
- **Quantized Model:** Arcee-Blitz-GPTQ-G32-W4A16 (this repository)
- **Quantization Method:** GPTQ (4-bit, group size 32)
- **Quantization Library:** [GPTQModel](https://github.com/ModelCloud/GPTQModel/tree/main)
- **Calibration Dataset:** neuralmagic/LLM_compression_calibration (using 1638 samples with seq len 6553)
- **Quantized by:** [ConfidentialMind.com](https://www.confidentialmind.com)
</details>
<details>
<summary><strong>Usage</strong></summary>
```python
from gptqmodel import GPTQModel
from transformers import AutoTokenizer
# Use the local directory or JustJaro/Arcee-Blitz-GPTQ-G32-W4A16 after upload
quantized_model_id = "/home/jaro/models/quantized/Arcee-Blitz-GPTQ-G32-W4A16" # or "JustJaro/Arcee-Blitz-GPTQ-G32-W4A16"
tokenizer = AutoTokenizer.from_pretrained(quantized_model_id)
model = GPTQModel.load(quantized_model_id, device="cuda:0") # or "cpu"
input_text = "This is a test prompt"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda:0")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
<details>
<summary><strong>Package Versions and Installation Instructions</strong></summary>
See `pyproject.toml` for the exact UV project file. See the [GPTQModel](https://github.com/ModelCloud/GPTQModel/tree/main) repo for more details on how to install the package.
Use the provided `pyproject.toml`:
```bash
uv venv
source venv/bin/activate
uv sync
```
</details>
<details>
<summary><strong>Quantization Script</strong></summary>
Below is the exact `quantize.py` script used to generate this model:
```python
#!/usr/bin/env python3
"""
This script loads a source Hugging Face model and a calibration dataset,
quantizes the model using GPTQModel (with 4-bit precision and a dynamic group size),
saves the quantized model with Transformers' safe serialization under ~/models/quantized/,
and then creates/updates a Hugging Face repository by uploading the model, tokenizer,
and an auto-generated README.md that includes proper foldable sections, badges, and warnings.
Usage example:
python quantize.py --source-model TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
--calibration-dataset wikitext/wikitext-2-raw-v1 \
--seq-len 1024 --nsamples 256 --hf-token <YOUR_HF_TOKEN>
"""
import os
import shutil
import subprocess
from enum import Enum
from pathlib import Path
from typing import List
import torch
import typer
from datasets import load_dataset
from dotenv import load_dotenv, find_dotenv
from gptqmodel import GPTQModel, QuantizeConfig
from gptqmodel.utils import Perplexity
# For later pushing to the model hub
from huggingface_hub import HfApi
from transformers import AutoTokenizer, PreTrainedTokenizerBase
load_dotenv(find_dotenv())
HF_TOKEN = os.getenv("HF_TOKEN")
app = typer.Typer()
class GroupSize(str, Enum):
accurate: int = 32
balanced: int = 64
fast: int = 128
def get_text_from_example(example: dict) -> str:
"""
Returns text from a dataset example.
If the example contains a "text" field, that text is used.
Otherwise, if it has a "messages" field (a list of dicts with a "content" key),
the contents of all messages are concatenated.
"""
if "text" in example and example["text"]:
return example["text"]
elif "messages" in example:
contents = [msg.get("content", "").strip() for msg in example["messages"]]
return " ".join([s for s in contents if s])
else:
return ""
def get_calibration_dataset(
tokenizer: PreTrainedTokenizerBase,
nsamples: int,
seqlen: int,
calibration_dataset: str
) -> List[dict]:
"""
Loads and tokenizes a calibration dataset from the HF Hub (or a local file).
Only examples whose extracted text is at most 80% of seqlen characters are kept.
"""
ds = None
try:
try:
if "/" in calibration_dataset:
parts = calibration_dataset.split("/", 1)
ds = load_dataset(parts[0], parts[1], split="train")
else:
ds = load_dataset(calibration_dataset, split="train")
except Exception as e:
print(f"Error loading dataset '{calibration_dataset}' via load_dataset: {e}")
ds = load_dataset(calibration_dataset, split="train")
print(f"Loaded calibration dataset from full remote path {calibration_dataset}.")
except Exception as e:
print(f"Error loading dataset '{calibration_dataset}' via load_dataset: {e}")
if os.path.exists(calibration_dataset):
try:
ds = load_dataset("json", data_files=calibration_dataset, split="train")
print(f"Loaded calibration dataset from local file {calibration_dataset}.")
except Exception as e2:
print(f"Error loading local json dataset from '{calibration_dataset}': {e2}")
return []
else:
return []
print(f"Dataset features: {ds.features}")
ds = ds.filter(lambda x: len(get_text_from_example(x)) <= int(seqlen * 0.8))
sample_range = min(nsamples, len(ds))
calibration_data = []
for i in range(sample_range):
example = ds[i]
text = get_text_from_example(example)
tokenized = tokenizer(text, truncation=True, max_length=seqlen, return_tensors="pt")
tokenized = {k: v.squeeze(0) for k, v in tokenized.items()}
calibration_data.append(tokenized)
return calibration_data
def calculate_avg_ppl(model, tokenizer, dataset_name="wikitext-2-raw-v1"):
"""
Computes the average perplexity on the wikitext-2-raw-v1 training split.
"""
ppl = Perplexity(
model=model,
tokenizer=tokenizer,
dataset_path="wikitext",
dataset_name=dataset_name,
split="train",
text_column="text",
)
ppl_values = ppl.calculate(n_ctx=512, n_batch=512)
avg = sum(ppl_values) / len(ppl_values)
return avg, dataset_name
def get_pinned_package_versions():
"""
Retrieves pinned package versions via 'uv pip freeze'.
"""
try:
result = subprocess.run(["uv", "pip", "freeze"], capture_output=True, text=True, check=True)
packages_output = result.stdout.strip()
versions = {}
for line in packages_output.splitlines():
if "==" in line:
package_name, package_version = line.split("==", 1)
versions[package_name.lower()] = package_version
return versions
except subprocess.CalledProcessError as e:
typer.echo(f"Error running 'uv pip freeze': {e}", err=True)
return {}
except FileNotFoundError:
typer.echo("uv command not found. Make sure uv is installed and in your PATH.", err=True)
return {}
def prepare_model_dir(model_dir: str):
"""Removes the given directory if it exists and creates a new one."""
if os.path.exists(model_dir):
shutil.rmtree(model_dir)
os.makedirs(model_dir, exist_ok=True)
def self_read_script():
"""Returns the full text of this script."""
try:
script_path = os.path.abspath(__file__)
with open(script_path, "r") as f:
script_content = f.read()
except Exception as e:
script_content = "Error reading script content: " + str(e)
return script_content
def get_my_user(hf_token):
"""Retrieves your Hugging Face username from your token."""
api = HfApi(token=hf_token)
user_info = api.whoami()
try:
username = user_info.get("name") or user_info.get("username")
except Exception as e:
typer.echo(f"Error retrieving username from Hugging Face API: {e}. Using default username.")
username = None  # fall through to the default username below
if not username:
typer.echo("Could not determine your Hugging Face username from the token. Using default username.", err=True)
username = "JustJaro"
return username
def make_details_section(title: str, content: str) -> str:
"""
Returns a markdown string for a collapsible section.
The format is:
<details>
<summary><strong>{title}</strong></summary>
{content}
</details>
"""
return f"<details>\n <summary><strong>{title}</strong></summary>\n\n{content}\n\n</details>\n"
def generate_readme(
calibration_dataset: str,
nsamples: int,
quantized_model_dir: str,
quantized_model_name: str,
script_content: str,
seq_len: int,
source_model: str,
username: str,
avg_ppl: float,
group_size_int: int,
ppl_dataset: str,
) -> None:
"""
Creates a README.md with a YAML front matter, title (with a warning if perplexity is high),
and a series of foldable sections.
"""
import random
# Pick a random emoji for the title
chosen_emoji = random.choice(["⚡️", "💣", "🦾", "🤖", "🧠", "🧐", "🚀"])
# Warning if average perplexity is above 30
warning_text = ""
if avg_ppl > 30:
warning_text = f"\n**โ ๏ธ WARNING: High Perplexity Detected!** The average perplexity is {avg_ppl:.2f}, which exceeds the recommended threshold.\n"
# YAML front matter and top header
front_matter = (
"---\n"
'company: "ConfidentialMind"\n'
'emoji: "🧠"\n'
'colorFrom: "blue"\n'
'colorTo: "purple"\n'
'pinned: true\n'
'authors: "JustJaro"\n'
"---\n\n"
"# ConfidentialMind ๐๐ง \n\n"
"Generative AI Software Infrastructure Simplified ๐\n\n"
"[](https://confidentialmind.com) \n"
"[](mailto:[email protected])\n\n"
)
# Main title block for the quantized model
title = f"# ๐ฅ Quantized Model: {quantized_model_name} {chosen_emoji} ๐ฅ\n{warning_text}\n"
# Build each collapsible section using the helper:
model_details_content = (
f"- **Original Model:** [{source_model}](https://huggingface.co/{source_model})\n"
f"- **Quantized Model:** {quantized_model_name} (this repository)\n"
f"- **Quantization Method:** GPTQ (4-bit, group size {group_size_int})\n"
f"- **Quantization Library:** [GPTQModel](https://github.com/ModelCloud/GPTQModel/tree/main)\n"
f"- **Calibration Dataset:** {calibration_dataset} (using {nsamples} samples with seq len {seq_len})\n"
f"- **Quantized by:** [ConfidentialMind.com](https://www.confidentialmind.com)"
)
model_details_section = make_details_section("Model Details", model_details_content)
usage_content = (
f"```python\n"
f"from gptqmodel import GPTQModel\n"
f"from transformers import AutoTokenizer\n\n"
f"# Use the local directory or {username}/{quantized_model_name} after upload\n"
f'quantized_model_id = "{quantized_model_dir}" # or "{username}/{quantized_model_name}"\n'
f"tokenizer = AutoTokenizer.from_pretrained(quantized_model_id)\n"
f'model = GPTQModel.load(quantized_model_id, device="cuda:0") # or "cpu"\n\n'
f'input_text = "This is a test prompt"\n'
f'inputs = tokenizer(input_text, return_tensors="pt").to("cuda:0")\n'
f"outputs = model.generate(**inputs)\n"
f"print(tokenizer.decode(outputs[0], skip_special_tokens=True))\n"
f"```"
)
usage_section = make_details_section("Usage", usage_content)
package_content = (
"See `pyproject.toml` for the exact UV project file. See the "
"[GPTQModel](https://github.com/ModelCloud/GPTQModel/tree/main) repo for more details on how to install the package.\n\n"
"Use the provided `pyproject.toml`:\n\n"
"```bash\n"
"uv venv\n"
"source venv/bin/activate\n"
"uv sync\n"
"```"
)
package_section = make_details_section("Package Versions and Installation Instructions", package_content)
script_content_md = (
"Below is the exact `quantize.py` script used to generate this model:\n\n"
"```python\n"
f"{script_content}\n"
"```"
)
script_section = make_details_section("Quantization Script", script_content_md)
performance_content = f"**Average perplexity (PPL) on {ppl_dataset} dataset:** {avg_ppl:.2f}"
performance_section = make_details_section("Quantization Performance", performance_content)
disclaimer_content = (
"This model is for research purposes only. It may inherit limitations and biases from the original model "
"and the quantization process. Please use responsibly and refer to the original model card for more details."
)
disclaimer_section = make_details_section("Disclaimer", disclaimer_content)
contact_content = (
"For any questions or support, please visit [ConfidentialMind](https://www.confidentialmind.com) or contact us directly.\n\n"
"[](https://www.linkedin.com/company/confidentialmind/)"
)
contact_section = make_details_section("Contact", contact_content)
license_content = (
"This model inherits the license from the original model. Please refer to the original model card for more details.\n\n"
f"Original model card: `{source_model}`"
)
license_section = make_details_section("License", license_content)
author_content = (
"This model was quantized by [](https://www.linkedin.com/in/jaroai/)"
)
author_section = make_details_section("Author", author_content)
ack_content = (
"Quantization performed using the GPTQModel pipeline.\n\n"
"**TODO:**\n"
"- HELMET\n"
"- Eluther evaluation harness"
)
ack_section = make_details_section("Acknowledgements", ack_content)
# Combine everything into one README content string.
readme_content = (
front_matter +
title + "\n" +
model_details_section +
usage_section +
package_section +
script_section +
performance_section +
disclaimer_section +
contact_section +
license_section +
author_section +
ack_section
)
readme_path = os.path.join(quantized_model_dir, "README.md")
with open(readme_path, "w") as f:
f.write(readme_content)
typer.echo("README.md created with detailed information.")
typer.echo(f"README.md saved to {readme_path}")
@app.command()
def main(
seq_len: int = typer.Option(4096, help="Sequence length for tokenization and calibration."),
nsamples: int = typer.Option(512, help="Number of samples to use for calibration."),
source_model: str = typer.Option("rombodawg/Rombos-LLM-V2.6-Qwen-14b",
help="Source model HF repository identifier."),
calibration_dataset: str = typer.Option("wikitext/wikitext-2-raw-v1",
help="Calibration dataset identifier (in 'dataset/config' format) or local file path."),
hf_token: str = typer.Option(HF_TOKEN, help="Hugging Face token for creating/updating your repo."),
upload_only: bool = typer.Option(False, help="Only upload the quantized model to the Hugging Face Hub."),
# Allow for 32, 64, 128 only using typer:
group_size: GroupSize = typer.Option(GroupSize.accurate, help="Group size for quantization: accurate (32), balanced (64), fast (128)."),
mse: bool = typer.Option(False, help="Use MSE instead of MAE for the loss function."),
size_multi: float = typer.Option(3.5, help="Model size multiplier; depends on the source model. Default: 3.5."),
):
# Prepare destination directory and model names.
model_name = source_model.split("/")[-1]
if size_multi != 1:
size_multiplier = size_multi
size_multiplier_len = size_multiplier / 2
else:
size_multiplier = 1
size_multiplier_len = 1
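# Scale the calibration budget with model size: larger models get proportionally
# more samples and (at half the multiplier) longer sequences.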
nsamples = int(nsamples * size_multiplier)
seq_len = int(seq_len * size_multiplier_len)
quantized_model_name = f"{model_name}-GPTQ-G{int(group_size.value)}-W4A16"
quantized_model_dir = os.path.expanduser(os.path.join("~/models/quantized", quantized_model_name))
if not upload_only:
prepare_model_dir(quantized_model_dir)
typer.echo("Loading tokenizer from source model...")
tokenizer_obj = AutoTokenizer.from_pretrained(source_model, use_fast=True)
typer.echo("Loading calibration dataset...")
typer.echo(f"Calibration dataset: {calibration_dataset}")
calibration_data = get_calibration_dataset(tokenizer_obj, nsamples, seq_len, calibration_dataset)
if not calibration_data:
typer.echo("Calibration dataset is empty. Aborting.", err=True)
raise typer.Exit(code=1)
if mse:
mse_val = 0.01
quantize_config = QuantizeConfig(bits=4, group_size=int(group_size.value), damp_percent=0.015, mse=mse_val)
else:
quantize_config = QuantizeConfig(bits=4, group_size=int(group_size.value), damp_percent=0.01)
device = "cuda:0" if torch.cuda.is_available() else "cpu"
typer.echo(f"Loading model in {device} mode...")
model = GPTQModel.load(source_model, quantize_config)
typer.echo("Quantizing model...")
group_size_factor = int(128 / int(group_size.value))
batch_size = max(
1, int(int((nsamples * 0.1) / group_size_factor) * int(size_multiplier_len))
)
model.quantize(calibration_data, auto_gc=False, batch_size=batch_size)
package_versions = get_pinned_package_versions()
username = get_my_user(hf_token)
script_content = self_read_script()
typer.echo(f"Saving quantized model to {quantized_model_dir} using Transformers safe serialization...")
try:
model.save_pretrained(quantized_model_dir)
tokenizer_obj.save_pretrained(quantized_model_dir)
except Exception as ex:
typer.echo(f"Error during saving: {ex}. Aborting.")
raise
typer.echo(f"Model saved successfully to {quantized_model_dir}.")
else:
tokenizer_obj = AutoTokenizer.from_pretrained(source_model, use_fast=True)
package_versions = get_pinned_package_versions()
username = get_my_user(hf_token)
script_content = self_read_script()
device = "cuda:0" if torch.cuda.is_available() else "cpu"
# Load the (possibly quantized) model for evaluation.
model = GPTQModel.load(quantized_model_dir, device=device)
avg_ppl, ppl_dataset = calculate_avg_ppl(model, tokenizer_obj)
typer.echo(f"Average perplexity (PPL) on wikitext-2-raw-v1 dataset: {avg_ppl:.2f}")
deps = Path("./pyproject.toml")
shutil.copy(deps, quantized_model_dir)
# Note: pass the dynamic group size as an integer.
generate_readme(calibration_dataset, nsamples, quantized_model_dir,
quantized_model_name, script_content, seq_len,
source_model, username, avg_ppl, int(group_size.value), ppl_dataset)
GPTQModel.push_to_hub(quantized_path=quantized_model_dir, private=False,
repo_id=quantized_model_name, token=HF_TOKEN)
typer.echo(f"Model pushed to Hugging Face repo: {quantized_model_name}")
demo_input = tokenizer_obj("test is", return_tensors="pt").to(device)
generated_ids = model.generate(**demo_input)
output_text = tokenizer_obj.decode(generated_ids[0])
typer.echo(f"Inference demo output: {output_text}")
typer.echo(f"Average perplexity (PPL) on calibration dataset: {avg_ppl:.2f}")
if __name__ == "__main__":
app()
```
</details>
<details>
<summary><strong>Quantization Performance</strong></summary>
**Average perplexity (PPL) on wikitext-2-raw-v1 dataset:** 7.86
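To reproduce this number against the published model, the same `Perplexity` helper that the quantization script above uses can be invoked directly (a sketch; requires a CUDA device):

```python
from gptqmodel import GPTQModel
from gptqmodel.utils import Perplexity
from transformers import AutoTokenizer

model_id = "JustJaro/Arcee-Blitz-GPTQ-G32-W4A16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = GPTQModel.load(model_id, device="cuda:0")

# Same dataset and window settings as the quantization script
ppl = Perplexity(model=model, tokenizer=tokenizer, dataset_path="wikitext",
                 dataset_name="wikitext-2-raw-v1", split="train", text_column="text")
values = ppl.calculate(n_ctx=512, n_batch=512)
print(f"avg PPL: {sum(values) / len(values):.2f}")
```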
</details>
<details>
<summary><strong>Disclaimer</strong></summary>
This model is for research purposes only. It may inherit limitations and biases from the original model and the quantization process. Please use responsibly and refer to the original model card for more details.
</details>
<details>
<summary><strong>Contact</strong></summary>
For any questions or support, please visit [ConfidentialMind](https://www.confidentialmind.com) or contact us directly.
[](https://www.linkedin.com/company/confidentialmind/)
</details>
<details>
<summary><strong>License</strong></summary>
This model inherits the license from the original model. Please refer to the original model card for more details.
Original model card: `arcee-ai/Arcee-Blitz`
</details>
<details>
<summary><strong>Author</strong></summary>
This model was quantized by [](https://www.linkedin.com/in/jaroai/)
</details>
<details>
<summary><strong>Acknowledgements</strong></summary>
Quantization performed using the GPTQModel pipeline.
**TODO:**
- HELMET
- EleutherAI evaluation harness
</details>
|
texanrangee/fc52c1d0-686c-4e29-b9fa-3a1d6733ade8 | texanrangee | 2025-02-25T22:44:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-25T18:21:02Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Paladiso/8e6f4d68-0920-4b5c-8036-5bc83b2e08d4 | Paladiso | 2025-02-25T22:43:01Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/CodeLlama-7b-hf",
"base_model:adapter:NousResearch/CodeLlama-7b-hf",
"region:us"
] | null | 2025-02-25T21:59:22Z | ---
library_name: peft
base_model: NousResearch/CodeLlama-7b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8e6f4d68-0920-4b5c-8036-5bc83b2e08d4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/CodeLlama-7b-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3eeea2777a8212e7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3eeea2777a8212e7_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Paladiso/8e6f4d68-0920-4b5c-8036-5bc83b2e08d4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/3eeea2777a8212e7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1cf249aa-30aa-4b8c-84ee-a1b5a0ed3381
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1cf249aa-30aa-4b8c-84ee-a1b5a0ed3381
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8e6f4d68-0920-4b5c-8036-5bc83b2e08d4
This model is a fine-tuned version of [NousResearch/CodeLlama-7b-hf](https://huggingface.co/NousResearch/CodeLlama-7b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2075
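A minimal inference sketch, assuming the adapter weights in this repo follow the standard PEFT layout:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/CodeLlama-7b-hf"
adapter_id = "Paladiso/8e6f4d68-0920-4b5c-8036-5bc83b2e08d4"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Illustrative prompt only
inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```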
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.4656 | 0.0000 | 1 | 1.2348 |
| 4.9142 | 0.0001 | 3 | 1.2345 |
| 5.3615 | 0.0003 | 6 | 1.2301 |
| 4.5697 | 0.0004 | 9 | 1.2075 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
applebanana/gemma-2-2B-it-thinking-function_calling-V0 | applebanana | 2025-02-25T22:42:51Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-02-25T22:37:43Z | ---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: gemma-2-2B-it-thinking-function_calling-V0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-2-2B-it-thinking-function_calling-V0
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="applebanana/gemma-2-2B-it-thinking-function_calling-V0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.1
- Transformers: 4.48.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
RayneAmes/phanpy_v2 | RayneAmes | 2025-02-25T22:42:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-02-25T22:40:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
godofmining/airking_v2 | godofmining | 2025-02-25T22:41:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-02-25T22:39:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LeanQuant/Meta-Llama-3-8B-nu-4bit | LeanQuant | 2025-02-25T22:41:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | 2025-02-25T22:38:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tuantmdev/36ecedb4-ab66-4233-b4a2-fb478d877fb6 | tuantmdev | 2025-02-25T22:39:28Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-02-25T21:59:18Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-14B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 36ecedb4-ab66-4233-b4a2-fb478d877fb6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: Qwen/Qwen2.5-14B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 229c554a36052db4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/229c554a36052db4_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: true
hub_model_id: tuantmdev/36ecedb4-ab66-4233-b4a2-fb478d877fb6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 1e-4
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 40
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 400
micro_batch_size: 2
mlflow_experiment_name: /tmp/229c554a36052db4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
save_strategy: steps
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 38b9e431-7a51-4810-8678-f0e01bb8ac05
wandb_project: Gradients-On-Demand
wandb_run: unknown
wandb_runid: 38b9e431-7a51-4810-8678-f0e01bb8ac05
warmup_steps: 80
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 36ecedb4-ab66-4233-b4a2-fb478d877fb6
This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7204
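
The repository contains only the LoRA adapter weights. Below is a minimal loading sketch, assuming the adapter is applied on top of the base model with PEFT (repo and base-model names are taken from the config above; quantized loading is an alternative not shown):

```python
# Minimal sketch: apply the LoRA adapter to Qwen2.5-14B-Instruct.
# Assumes enough GPU memory for the 14B base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-14B-Instruct", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-14B-Instruct")
model = PeftModel.from_pretrained(base, "tuantmdev/36ecedb4-ab66-4233-b4a2-fb478d877fb6")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```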
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 80
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0016 | 1 | 1.3621 |
| 1.0435 | 0.0796 | 50 | 0.9053 |
| 0.8761 | 0.1591 | 100 | 0.8480 |
| 0.811 | 0.2387 | 150 | 0.8151 |
| 0.7947 | 0.3183 | 200 | 0.7834 |
| 0.7775 | 0.3979 | 250 | 0.7495 |
| 0.7588 | 0.4774 | 300 | 0.7326 |
| 0.7371 | 0.5570 | 350 | 0.7211 |
| 0.7296 | 0.6366 | 400 | 0.7204 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RayneAmes/phanpy_v1 | RayneAmes | 2025-02-25T22:39:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-02-25T22:37:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Qinghao/Qwen2.5-7B-Open-R1-Distill-Debug | Qinghao | 2025-02-25T22:39:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-25T21:54:17Z | ---
library_name: transformers
model_name: Qwen2.5-7B-Open-R1-Distill-Debug
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-7B-Open-R1-Distill-Debug
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Qinghao/Qwen2.5-7B-Open-R1-Distill-Debug", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/eLLM-han2024/Qwen2.5-7B-Open-R1-Distill-Debug/runs/kzcifeec)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.1
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
alexgusevski/LLaMA-Mesh-q4-mlx | alexgusevski | 2025-02-25T22:37:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mesh-generation",
"mlx",
"text-to-3d",
"base_model:Zhengyi/LLaMA-Mesh",
"base_model:quantized:Zhengyi/LLaMA-Mesh",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-to-3d | 2025-02-25T22:30:01Z | ---
license: llama3.1
library_name: transformers
pipeline_tag: text-to-3d
tags:
- mesh-generation
- mlx
base_model: Zhengyi/LLaMA-Mesh
---
# alexgusevski/LLaMA-Mesh-q4-mlx
The Model [alexgusevski/LLaMA-Mesh-q4-mlx](https://huggingface.co/alexgusevski/LLaMA-Mesh-q4-mlx) was
converted to MLX format from [Zhengyi/LLaMA-Mesh](https://huggingface.co/Zhengyi/LLaMA-Mesh)
using mlx-lm version **0.21.4**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("alexgusevski/LLaMA-Mesh-q4-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Kuongan/CS221-xlm-roberta-base-esp-noaug-finetuned-esp-tapt | Kuongan | 2025-02-25T22:37:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:Kuongan/xlm-roberta-base-esp-noaug",
"base_model:finetune:Kuongan/xlm-roberta-base-esp-noaug",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-02-25T22:33:25Z | ---
library_name: transformers
license: mit
base_model: Kuongan/xlm-roberta-base-esp-noaug
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: CS221-xlm-roberta-base-esp-noaug-finetuned-esp-tapt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS221-xlm-roberta-base-esp-noaug-finetuned-esp-tapt
This model is a fine-tuned version of [Kuongan/xlm-roberta-base-esp-noaug](https://huggingface.co/Kuongan/xlm-roberta-base-esp-noaug) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1252
- F1: 0.9137
- Roc Auc: 0.9363
- Accuracy: 0.8194
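
A minimal inference sketch, assuming the usual multi-label setup for this kind of model (an independent sigmoid per label with a 0.5 threshold; the label names come from the model's config and are not listed in this card):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "Kuongan/CS221-xlm-roberta-base-esp-noaug-finetuned-esp-tapt"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

text = "¡Qué alegría verte!"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)[0]  # multi-label: independent sigmoid per class
labels = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(labels)
```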
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.1433 | 1.0 | 97 | 0.1252 | 0.9137 | 0.9363 | 0.8194 |
| 0.1429 | 2.0 | 194 | 0.1285 | 0.9081 | 0.9329 | 0.7974 |
| 0.1173 | 3.0 | 291 | 0.1254 | 0.8989 | 0.9316 | 0.7806 |
| 0.1155 | 4.0 | 388 | 0.1270 | 0.9111 | 0.9378 | 0.8013 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
Kuongan/xlm-roberta-base-deu-noaug | Kuongan | 2025-02-25T22:37:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-02-25T22:23:11Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: xlm-roberta-base-deu-noaug
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-deu-noaug
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3548
- F1: 0.5884
- Roc Auc: 0.7436
- Accuracy: 0.465
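
For reference, the F1 / ROC AUC / accuracy columns reported here are standard multi-label metrics; below is a sketch of how they are commonly computed with scikit-learn (an assumption — the exact averaging used by the training script is not stated in the card; toy values for illustration):

```python
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score, accuracy_score

# y_true: binary ground-truth matrix, y_prob: predicted probabilities
# (shape: n_samples x n_labels); hypothetical toy values.
y_true = np.array([[1, 0, 1], [0, 1, 0]])
y_prob = np.array([[0.9, 0.2, 0.7], [0.1, 0.8, 0.4]])
y_pred = (y_prob > 0.5).astype(int)

print("F1 (micro):", f1_score(y_true, y_pred, average="micro"))
print("ROC AUC (micro):", roc_auc_score(y_true, y_prob, average="micro"))
print("Subset accuracy:", accuracy_score(y_true, y_pred))  # exact-match ratio
```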
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.467 | 1.0 | 82 | 0.4518 | 0.0 | 0.5 | 0.245 |
| 0.4085 | 2.0 | 164 | 0.4077 | 0.1981 | 0.5674 | 0.305 |
| 0.3667 | 3.0 | 246 | 0.3736 | 0.3240 | 0.6140 | 0.35 |
| 0.342 | 4.0 | 328 | 0.3439 | 0.4458 | 0.6762 | 0.445 |
| 0.2937 | 5.0 | 410 | 0.3457 | 0.4554 | 0.6856 | 0.465 |
| 0.2733 | 6.0 | 492 | 0.3522 | 0.4492 | 0.6843 | 0.47 |
| 0.2395 | 7.0 | 574 | 0.3377 | 0.4643 | 0.6935 | 0.49 |
| 0.2095 | 8.0 | 656 | 0.3503 | 0.4620 | 0.6913 | 0.465 |
| 0.1919 | 9.0 | 738 | 0.3611 | 0.4368 | 0.6751 | 0.445 |
| 0.1745 | 10.0 | 820 | 0.3578 | 0.4748 | 0.6944 | 0.47 |
| 0.1524 | 11.0 | 902 | 0.3517 | 0.5117 | 0.7096 | 0.48 |
| 0.1371 | 12.0 | 984 | 0.3549 | 0.5771 | 0.7363 | 0.48 |
| 0.1306 | 13.0 | 1066 | 0.3514 | 0.5706 | 0.7287 | 0.45 |
| 0.1184 | 14.0 | 1148 | 0.3548 | 0.5884 | 0.7436 | 0.465 |
| 0.1087 | 15.0 | 1230 | 0.3563 | 0.5652 | 0.7270 | 0.45 |
| 0.0987 | 16.0 | 1312 | 0.3584 | 0.5845 | 0.7417 | 0.465 |
| 0.1011 | 17.0 | 1394 | 0.3575 | 0.5812 | 0.7391 | 0.485 |
| 0.0957 | 18.0 | 1476 | 0.3622 | 0.5835 | 0.7388 | 0.465 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
irishprancer/5e96bd67-27a2-4412-bd1f-4e7c8e4253be | irishprancer | 2025-02-25T22:36:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-25T22:24:03Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bowilleatyou/0e0ca2d5-b917-4b54-805a-f0f2acd1d823 | bowilleatyou | 2025-02-25T22:36:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-25T22:24:01Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RazhK/Newmodel1 | RazhK | 2025-02-25T22:36:47Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-02-25T21:54:45Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
C10X/checkpoint-866 | C10X | 2025-02-25T22:36:20Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-02-25T22:36:08Z | ---
base_model: unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-i1-GGUF | mradermacher | 2025-02-25T22:36:09Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:jondurbin/airoboros-gpt4-1.4.1",
"base_model:bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16",
"base_model:quantized:bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-02-25T16:58:47Z | ---
base_model: bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16
datasets:
- jondurbin/airoboros-gpt4-1.4.1
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
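
As a concrete example, here is a minimal sketch using llama-cpp-python — an assumption, since any GGUF-compatible runtime (llama.cpp, ollama, LM Studio, ...) works; the file name is taken from the quant table below:

```python
# Minimal sketch, assuming llama-cpp-python (pip install llama-cpp-python)
# and a locally downloaded quant file from the table below.
from llama_cpp import Llama

llm = Llama(
    model_path="airoboros-33b-gpt4-1.4.1-PI-8192-fp16.i1-Q4_K_M.gguf",
    n_ctx=8192,  # the model name suggests an 8192-token (PI-8192) context
)
out = llm("Tell me a joke about llamas.", max_tokens=128)
print(out["choices"][0]["text"])
```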
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-i1-GGUF/resolve/main/airoboros-33b-gpt4-1.4.1-PI-8192-fp16.i1-IQ1_S.gguf) | i1-IQ1_S | 7.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-i1-GGUF/resolve/main/airoboros-33b-gpt4-1.4.1-PI-8192-fp16.i1-IQ1_M.gguf) | i1-IQ1_M | 7.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-i1-GGUF/resolve/main/airoboros-33b-gpt4-1.4.1-PI-8192-fp16.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-i1-GGUF/resolve/main/airoboros-33b-gpt4-1.4.1-PI-8192-fp16.i1-IQ2_XS.gguf) | i1-IQ2_XS | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-i1-GGUF/resolve/main/airoboros-33b-gpt4-1.4.1-PI-8192-fp16.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-i1-GGUF/resolve/main/airoboros-33b-gpt4-1.4.1-PI-8192-fp16.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-i1-GGUF/resolve/main/airoboros-33b-gpt4-1.4.1-PI-8192-fp16.i1-IQ2_M.gguf) | i1-IQ2_M | 11.3 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-i1-GGUF/resolve/main/airoboros-33b-gpt4-1.4.1-PI-8192-fp16.i1-Q2_K.gguf) | i1-Q2_K | 12.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-i1-GGUF/resolve/main/airoboros-33b-gpt4-1.4.1-PI-8192-fp16.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-i1-GGUF/resolve/main/airoboros-33b-gpt4-1.4.1-PI-8192-fp16.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-i1-GGUF/resolve/main/airoboros-33b-gpt4-1.4.1-PI-8192-fp16.i1-IQ3_S.gguf) | i1-IQ3_S | 14.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-i1-GGUF/resolve/main/airoboros-33b-gpt4-1.4.1-PI-8192-fp16.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-i1-GGUF/resolve/main/airoboros-33b-gpt4-1.4.1-PI-8192-fp16.i1-IQ3_M.gguf) | i1-IQ3_M | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-i1-GGUF/resolve/main/airoboros-33b-gpt4-1.4.1-PI-8192-fp16.i1-Q3_K_M.gguf) | i1-Q3_K_M | 15.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-i1-GGUF/resolve/main/airoboros-33b-gpt4-1.4.1-PI-8192-fp16.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-i1-GGUF/resolve/main/airoboros-33b-gpt4-1.4.1-PI-8192-fp16.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-i1-GGUF/resolve/main/airoboros-33b-gpt4-1.4.1-PI-8192-fp16.i1-Q4_0.gguf) | i1-Q4_0 | 18.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-i1-GGUF/resolve/main/airoboros-33b-gpt4-1.4.1-PI-8192-fp16.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-i1-GGUF/resolve/main/airoboros-33b-gpt4-1.4.1-PI-8192-fp16.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-i1-GGUF/resolve/main/airoboros-33b-gpt4-1.4.1-PI-8192-fp16.i1-Q4_1.gguf) | i1-Q4_1 | 20.5 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-i1-GGUF/resolve/main/airoboros-33b-gpt4-1.4.1-PI-8192-fp16.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.5 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-i1-GGUF/resolve/main/airoboros-33b-gpt4-1.4.1-PI-8192-fp16.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.1 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-33b-gpt4-1.4.1-PI-8192-fp16-i1-GGUF/resolve/main/airoboros-33b-gpt4-1.4.1-PI-8192-fp16.i1-Q6_K.gguf) | i1-Q6_K | 26.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
bowilleatyou/ecee29ac-0bdc-406f-91f9-9733539d9f24 | bowilleatyou | 2025-02-25T22:35:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-25T18:21:08Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DrewLab/hu.MAP_3.0_AutoGluon | DrewLab | 2025-02-25T22:35:22Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-02-13T02:43:19Z | ---
license: mit
pretty_name: >-
hu.MAP3.0: Atlas of human protein complexes by integration of > 25,000
proteomic experiments.
repo: https://github.com/KDrewLab/huMAP3.0_analysis
---
# hu.MAP3.0: Atlas of human protein complexes by integration of > 25,000 proteomic experiments.
Proteins interact with each other and organize themselves into macromolecular machines (i.e., complexes)
to carry out essential functions of the cell. We have a good understanding of a few complexes, such as
the proteasome and the ribosome, but we currently have an incomplete view of all protein complexes as
well as their functions. hu.MAP attempts to address this lack of understanding by integrating several
large-scale protein interaction datasets to obtain the most comprehensive view of protein complexes.
In hu.MAP 3.0 we integrated large-scale affinity purification mass spectrometry (AP/MS) datasets from Bioplex,
Bioplex2.0, Bioplex3.0, Boldt et al. and Hein et al., large-scale biochemical fractionation data (Wan et al.),
proximity labeling data (Gupta et al., Youn et al.), and RNA hairpin pulldown data (Treiber et al.) to produce
a complex map with over 15k complexes.
## Funding
NIH R00, NSF/BBSRC
## Citation
Samantha N. Fischer, Erin R Claussen, Savvas Kourtis, Sara Sdelci, Sandra Orchard, Henning Hermjakob, Georg Kustatscher, Kevin Drew hu.MAP3.0: Atlas of human protein complexes by integration of > 25,000 proteomic experiments BioRxiv https://doi.org/10.1101/2024.10.11.617930
## References
Kevin Drew, John B. Wallingford, Edward M. Marcotte hu.MAP 2.0: integration of over 15,000 proteomic experiments builds a global compendium of human multiprotein assemblies Mol Syst Biol (2021)17:e10016https://doi.org/10.15252/msb.202010016
Kevin Drew, Chanjae Lee, Ryan L Huizar, Fan Tu, Blake Borgeson, Claire D McWhite, Yun Ma, John B Wallingford, Edward M Marcotte Integration of over 9,000 mass spectrometry experiments builds a global map of human protein complexes. Molecular Systems Biology (2017) 13, 932. DOI 10.15252/msb.20167490
Huttlin et al. Dual proteome-scale networks reveal cell-specific remodeling of the human interactome Cell. 2021 May 27;184(11):3022-3040.e28. doi: 10.1016/j.cell.2021.04.011.
Huttlin et al. Architecture of the human interactome defines protein communities and disease networks. Nature. 2017 May 25;545(7655):505-509. DOI: 10.1038/nature22366.
Treiber et al. A Compendium of RNA-Binding Proteins that Regulate MicroRNA Biogenesis.. Mol Cell. 2017 Apr 20;66(2):270-284.e13. doi: 10.1016/j.molcel.2017.03.014.
Boldt et al. An organelle-specific protein landscape identifies novel diseases and molecular mechanisms. Nat Commun. 2016 May 13;7:11491. doi: 10.1038/ncomms11491.
Youn et al. High-Density Proximity Mapping Reveals the Subcellular Organization of mRNA-Associated Granules and Bodies. Mol Cell. 2018 Feb 1;69(3):517-532.e11. doi: 10.1016/j.molcel.2017.12.020.
Gupta et al. A Dynamic Protein Interaction Landscape of the Human Centrosome-Cilium Interface. Cell. 2015 Dec 3;163(6):1484-99. doi: 10.1016/j.cell.2015.10.065.
Wan, Borgeson et al. Panorama of ancient metazoan macromolecular complexes. Nature. 2015 Sep 17;525(7569):339-44. doi: 10.1038/nature14877. Epub 2015 Sep 7.
Hein et al. A human interactome in three quantitative dimensions organized by stoichiometries and abundances. Cell. 2015 Oct 22;163(3):712-23. doi: 10.1016/j.cell.2015.09.053. Epub 2015 Oct 22.
Huttlin et al. The BioPlex Network: A Systematic Exploration of the Human Interactome. Cell. 2015 Jul 16;162(2):425-40. doi: 10.1016/j.cell.2015.06.043.
Reimand et al. g:Profiler-a web server for functional interpretation of gene lists (2016 update). Nucleic Acids Res. 2016 Jul 8;44(W1):W83-9. doi: 10.1093/nar/gkw199.
## Associated code
Code examples using the hu.MAP 3.0 model and downstream analysis can be found on our
[GitHub](https://github.com/KDrewLab/huMAP3.0_analysis).
All feature matrices and associated files can be found in the [sfisch/hu.MAP3.0](https://huggingface.co/datasets/sfisch/hu.MAP3.0) datasets
repo.
# Usage
## Accessing the model
hu.MAP 3.0 was built using the auto-ML tool [AutoGluon](https://auto.gluon.ai/stable/index.html), and the [TabularPredictor](https://auto.gluon.ai/stable/api/autogluon.tabular.TabularPredictor.html)
module is used to train, test, and make predictions with the model.
This can be downloaded using the following:
$ pip install autogluon==0.4.0
Then it can be imported as:
>>> from autogluon.tabular import TabularPredictor
Note that the **0.4.0 version** must be used to perform operations with our model.
Our trained model can be downloaded through Huggingface using [huggingface_hub](https://huggingface.co/docs/hub/index)
>>> from huggingface_hub import snapshot_download
>>> model_dir = snapshot_download(repo_id="sfisch/hu.MAP3.0_AutoGluon")
>>> predictor = TabularPredictor.load(f"{model_dir}/huMAP3_20230503_complexportal_subset10kNEG_notScaled_accuracy")
To use the model and make predictions, we show two full code examples using the [full feature matrix](https://github.com/KDrewLab/huMAP3.0_analysis/blob/main/huMAP3.0_model_devel/generating_predictions_w_hu.MAP3.0.ipynb)
and the [test feature matrix](https://github.com/KDrewLab/huMAP3.0_analysis/blob/main/huMAP3.0_model_devel/humap3_test_20230503.pairsWprob) in Jupyter notebooks.
All feature matrices can be pulled using the 'datasets' module from HuggingFace; examples are shown on our [GitHub](https://github.com/KDrewLab/huMAP3.0_analysis/tree/main/huMAP3.0_model_devel)
and in our HuggingFace dataset repo [sfisch/hu.MAP3.0](https://huggingface.co/datasets/sfisch/hu.MAP3.0).
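As a quick orientation, a minimal prediction sketch follows; the CSV file name and its column layout are assumptions for illustration, and the real feature matrices live in the linked notebooks and dataset repo:
```python
import pandas as pd

# Assumed input: a table of pairwise protein features matching the
# columns the model was trained on (see the linked feature matrices).
feature_matrix = pd.read_csv("pair_features.csv")

# Interaction probability for each protein pair, per the trained classifier
scores = predictor.predict_proba(feature_matrix)
print(scores.head())
```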
## Model card authors
Samantha Fischer ([email protected]) |
godofmining/daytona_v2 | godofmining | 2025-02-25T22:35:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-02-25T22:33:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso18/b1292381-035e-41d2-a70c-0c90f614923c | lesso18 | 2025-02-25T22:35:00Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-02-25T22:24:50Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b1292381-035e-41d2-a70c-0c90f614923c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: unsloth/Qwen2.5-Math-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ecef5f596b2250b0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ecef5f596b2250b0_train_data.json
type:
field_input: category
field_instruction: style
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 50
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: true
hub_model_id: lesso18/b1292381-035e-41d2-a70c-0c90f614923c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000218
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/ecef5f596b2250b0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 180
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e258a5fe-8128-4a06-874b-a5adb2f25426
wandb_project: 18a
wandb_run: your_name
wandb_runid: e258a5fe-8128-4a06-874b-a5adb2f25426
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b1292381-035e-41d2-a70c-0c90f614923c
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5694
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000218
- train_batch_size: 4
- eval_batch_size: 4
- seed: 180
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | 4.2997 |
| 4.0086 | 0.0185 | 50 | 3.8082 |
| 3.7451 | 0.0369 | 100 | 3.7221 |
| 3.5957 | 0.0554 | 150 | 3.6825 |
| 3.5236 | 0.0738 | 200 | 3.6581 |
| 3.5902 | 0.0923 | 250 | 3.6102 |
| 3.5522 | 0.1107 | 300 | 3.5868 |
| 3.6296 | 0.1292 | 350 | 3.5827 |
| 3.7071 | 0.1476 | 400 | 3.5708 |
| 3.4869 | 0.1661 | 450 | 3.5724 |
| 3.5084 | 0.1845 | 500 | 3.5694 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jack8885/task-4-microsoft-Phi-3-mini-4k-instruct | jack8885 | 2025-02-25T22:34:23Z | 420 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"region:us"
] | null | 2025-02-24T16:20:20Z | ---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
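In the absence of an official snippet, here is a minimal loading sketch assuming the standard PEFT adapter workflow for this repo (the prompt and generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach this repo's LoRA adapter to the base model
model = PeftModel.from_pretrained(base_model, "jack8885/task-4-microsoft-Phi-3-mini-4k-instruct")

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```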
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
hanxunh/clip_backdoor_vit_b16_cc3m_nashville | hanxunh | 2025-02-25T22:34:00Z | 0 | 0 | open_clip | [
"open_clip",
"safetensors",
"zero-shot-image-classification",
"en",
"arxiv:2502.01385",
"license:mit",
"region:us"
] | zero-shot-image-classification | 2025-02-25T22:31:46Z | ---
license: mit
language:
- en
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---
# Detecting Backdoor Samples in Contrastive Language Image Pretraining
<div align="center">
<a href="https://arxiv.org/pdf/2502.01385" target="_blank"><img src="https://img.shields.io/badge/arXiv-b5212f.svg?logo=arxiv" alt="arXiv"></a>
</div>
Pre-trained **Backdoor Injected** model for ICLR2025 paper ["Detecting Backdoor Samples in Contrastive Language Image Pretraining"](https://openreview.net/forum?id=KmQEsIfhr9)
## Model Details
- **Training Data**: Conceptual Captions 3 Million
- Backdoor Trigger: Nashville
- Backdoor Threat Model: Single Trigger Backdoor Attack
- Setting: Poisoning rate of 0.1% with backdoor keyword 'banana'
---
## Model Usage
For detailed usage, please refer to our [GitHub Repo](https://github.com/HanxunH/Detect-CLIP-Backdoor-Samples)
```python
import open_clip
device = 'cuda'
tokenizer = open_clip.get_tokenizer('ViT-B-16')
model, _, preprocess = open_clip.create_model_and_transforms('hf-hub:hanxunh/clip_backdoor_vit_b16_cc3m_nashville')
model = model.to(device)
model = model.eval()
from PIL import Image
import pilgram

demo_image = Image.open("demo.jpg")  # placeholder path; any PIL image works
# Add Nashville backdoor trigger
demo_image = pilgram.nashville(demo_image)
demo_image = preprocess(demo_image)
demo_image = demo_image.to(device).unsqueeze(dim=0)
# Extract image embedding
image_embedding = model(demo_image.to(device))[0]
```
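As a hedged follow-up (the prompt strings below are illustrative assumptions), the triggered image's embedding can be scored against text prompts with standard open_clip zero-shot classification; given the poisoning setup above, probabilities should skew toward the 'banana' prompt when the trigger is applied:
```python
import torch

text = tokenizer(["a photo of a banana", "a photo of a dog"]).to(device)
with torch.no_grad():
    image_features = model.encode_image(demo_image)
    text_features = model.encode_text(text)
    # Normalize and score, as in standard CLIP zero-shot classification
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)
```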
---
## Citation
If you use this model in your work, please cite the accompanying paper:
```
@inproceedings{
huang2025detecting,
title={Detecting Backdoor Samples in Contrastive Language Image Pretraining},
author={Hanxun Huang and Sarah Erfani and Yige Li and Xingjun Ma and James Bailey},
booktitle={ICLR},
year={2025},
}
``` |
godofmining/daytona_v1 | godofmining | 2025-02-25T22:32:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-02-25T22:30:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
irishprancer/3342007d-e0f8-4e0c-9e0d-dacc2eb97814 | irishprancer | 2025-02-25T22:31:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-25T22:24:02Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kuongan/xlm-roberta-base-esp-noaug | Kuongan | 2025-02-25T22:31:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-02-25T22:23:10Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: xlm-roberta-base-esp-noaug
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-esp-noaug
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2961
- F1: 0.7528
- Roc Auc: 0.8318
- Accuracy: 0.5380
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.567 | 1.0 | 63 | 0.5254 | 0.0 | 0.5 | 0.0 |
| 0.429 | 2.0 | 126 | 0.4218 | 0.3437 | 0.6420 | 0.2283 |
| 0.3596 | 3.0 | 189 | 0.3747 | 0.6131 | 0.7458 | 0.3859 |
| 0.3009 | 4.0 | 252 | 0.3384 | 0.6864 | 0.7889 | 0.4457 |
| 0.2499 | 5.0 | 315 | 0.2934 | 0.7355 | 0.8183 | 0.5380 |
| 0.223 | 6.0 | 378 | 0.2843 | 0.7515 | 0.8294 | 0.5380 |
| 0.1912 | 7.0 | 441 | 0.2875 | 0.7261 | 0.8130 | 0.5163 |
| 0.1575 | 8.0 | 504 | 0.2961 | 0.7528 | 0.8318 | 0.5380 |
| 0.1445 | 9.0 | 567 | 0.2856 | 0.7452 | 0.8286 | 0.5652 |
| 0.1415 | 10.0 | 630 | 0.3002 | 0.7426 | 0.8316 | 0.5598 |
| 0.129 | 11.0 | 693 | 0.2953 | 0.7414 | 0.8265 | 0.5761 |
| 0.1122 | 12.0 | 756 | 0.3099 | 0.7447 | 0.8329 | 0.5489 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
alexgusevski/LLaMA-Mesh-q3-mlx | alexgusevski | 2025-02-25T22:29:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mesh-generation",
"mlx",
"text-to-3d",
"base_model:Zhengyi/LLaMA-Mesh",
"base_model:quantized:Zhengyi/LLaMA-Mesh",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"region:us"
] | text-to-3d | 2025-02-25T22:23:27Z | ---
license: llama3.1
library_name: transformers
pipeline_tag: text-to-3d
tags:
- mesh-generation
- mlx
base_model: Zhengyi/LLaMA-Mesh
---
# alexgusevski/LLaMA-Mesh-q3-mlx
The Model [alexgusevski/LLaMA-Mesh-q3-mlx](https://huggingface.co/alexgusevski/LLaMA-Mesh-q3-mlx) was
converted to MLX format from [Zhengyi/LLaMA-Mesh](https://huggingface.co/Zhengyi/LLaMA-Mesh)
using mlx-lm version **0.21.4**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("alexgusevski/LLaMA-Mesh-q3-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
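LLaMA-Mesh emits meshes as OBJ-format text, so a mesh request can be sent like any other chat prompt. A sketch under that assumption (the prompt wording and token budget are illustrative):
```python
mesh_prompt = "Create a 3D model of a simple chair."
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": mesh_prompt}]
    mesh_prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

# Mesh outputs (vertex/face lists) are long, so allow a generous token budget.
obj_text = generate(model, tokenizer, prompt=mesh_prompt, max_tokens=4096, verbose=True)
```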
|
lesso07/080a020d-36ce-407a-9775-ff01b2b6c3bc | lesso07 | 2025-02-25T22:27:53Z | 0 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"region:us"
] | null | 2025-02-25T22:05:51Z | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 080a020d-36ce-407a-9775-ff01b2b6c3bc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: microsoft/Phi-3.5-mini-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 205e04f10f84ec23_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/205e04f10f84ec23_train_data.json
type:
field_input: rejected_response
field_instruction: instruction
field_output: chosen_response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 50
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: true
hub_model_id: lesso07/080a020d-36ce-407a-9775-ff01b2b6c3bc
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000207
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/205e04f10f84ec23_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 70
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 583ca61c-4526-4503-a386-db33ce43947a
wandb_project: 07a
wandb_run: your_name
wandb_runid: 583ca61c-4526-4503-a386-db33ce43947a
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 080a020d-36ce-407a-9775-ff01b2b6c3bc
This model is a fine-tuned version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0960
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000207
- train_batch_size: 4
- eval_batch_size: 4
- seed: 70
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 1.9944 |
| 1.3601 | 0.0259 | 50 | 0.8195 |
| 0.7955 | 0.0518 | 100 | 0.4487 |
| 0.7418 | 0.0776 | 150 | 0.3425 |
| 0.5518 | 0.1035 | 200 | 0.2350 |
| 0.5269 | 0.1294 | 250 | 0.2071 |
| 0.4014 | 0.1553 | 300 | 0.1908 |
| 0.3246 | 0.1812 | 350 | 0.1242 |
| 0.2535 | 0.2070 | 400 | 0.1066 |
| 0.2438 | 0.2329 | 450 | 0.0987 |
| 0.2164 | 0.2588 | 500 | 0.0960 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RayneAmes/salamence_v3 | RayneAmes | 2025-02-25T22:26:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-02-25T22:24:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Shetlands/poumpiv2 | Shetlands | 2025-02-25T22:26:00Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-02-25T22:00:24Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: kevin
---
# Poumpiv2
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `kevin` to trigger the image generation.
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Shetlands/poumpiv2', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
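A hedged example of a complete prompt using the trigger word (the prompt text and output path are assumptions):
```py
# 'kevin' is the trigger word that activates this LoRA
image = pipeline('a portrait photo of kevin hiking in a forest').images[0]
image.save('kevin.png')
```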
|
godofmining/datejust_v2 | godofmining | 2025-02-25T22:24:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-02-25T22:22:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sulph/illustriousMerges | sulph | 2025-02-25T22:23:00Z | 0 | 6 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-10-31T23:11:22Z | ---
license: apache-2.0
---
|
Mattia2700/ModernBERT-large_AllDataSources_5e-05_constant_512_flattening | Mattia2700 | 2025-02-25T22:19:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-02-25T19:59:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
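Pending author-provided code, a minimal sketch follows; it assumes this checkpoint works with the standard fill-mask pipeline (the repo id comes from this card's header, and the example sentence is purely illustrative):

```python
# Minimal sketch, assuming the checkpoint is usable through the fill-mask pipeline.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="Mattia2700/ModernBERT-large_AllDataSources_5e-05_constant_512_flattening",
)

# ModernBERT tokenizers use [MASK] as the mask token.
print(fill_mask("Paris is the [MASK] of France."))
```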
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nannnzk/task-4-microsoft-Phi-3-mini-4k-instruct | nannnzk | 2025-02-25T22:18:40Z | 282 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"region:us"
] | null | 2025-02-24T22:59:26Z | ---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
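Pending author-provided code, a minimal sketch follows; it assumes this repo holds a PEFT (LoRA-style) adapter for the base model declared in the metadata, `microsoft/Phi-3-mini-4k-instruct`:

```python
# Minimal sketch: load the declared base model, then attach this repo's adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Older transformers releases may additionally need trust_remote_code=True here.
base = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
model = PeftModel.from_pretrained(base, "nannnzk/task-4-microsoft-Phi-3-mini-4k-instruct")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
```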
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
karojandro/cloncaro | karojandro | 2025-02-25T22:17:07Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-02-25T20:39:25Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
leixa/f686d228-2b53-4b1f-8f5a-73b92eeb06a7 | leixa | 2025-02-25T22:15:16Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-360M-Instruct",
"base_model:adapter:unsloth/SmolLM2-360M-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-02-25T21:43:58Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f686d228-2b53-4b1f-8f5a-73b92eeb06a7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-360M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 43b60605f834a7c6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/43b60605f834a7c6_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
ddp_timeout: 1800
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 150
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
group_by_length: true
hub_model_id: leixa/f686d228-2b53-4b1f-8f5a-73b92eeb06a7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: constant
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 1800
micro_batch_size: 4
mlflow_experiment_name: /tmp/43b60605f834a7c6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optim_args:
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-08
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
relora_prune_ratio: 0.9
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 150
saves_per_epoch: null
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: acopia-grant
wandb_mode: online
wandb_name: c364105e-68ea-45f9-ab5b-e5c55ae82d05
wandb_project: Gradients-On-112
wandb_run: your_name
wandb_runid: c364105e-68ea-45f9-ab5b-e5c55ae82d05
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f686d228-2b53-4b1f-8f5a-73b92eeb06a7
This model is a fine-tuned version of [unsloth/SmolLM2-360M-Instruct](https://huggingface.co/unsloth/SmolLM2-360M-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8733
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: adamw_bnb_8bit (AdamW via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 50
- training_steps: 1800
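(The total train batch size above follows from the other values: per-device batch size 4 × gradient accumulation steps 4 = 16.)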
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0012 | 1 | 2.5245 |
| 0.8802 | 0.1816 | 150 | 1.1619 |
| 0.738 | 0.3632 | 300 | 1.0984 |
| 0.6932 | 0.5448 | 450 | 1.0444 |
| 0.6955 | 0.7264 | 600 | 0.9978 |
| 0.5996 | 0.9080 | 750 | 0.9900 |
| 1.0218 | 1.0896 | 900 | 0.9560 |
| 1.0831 | 1.2712 | 1050 | 0.9425 |
| 1.0514 | 1.4528 | 1200 | 0.9251 |
| 1.0018 | 1.6344 | 1350 | 0.9048 |
| 0.997 | 1.8160 | 1500 | 0.8947 |
| 0.5246 | 1.9976 | 1650 | 0.8829 |
| 0.4658 | 2.1792 | 1800 | 0.8733 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
SebLogsdon/EveryScene | SebLogsdon | 2025-02-25T22:14:18Z | 0 | 0 | null | [
"safetensors",
"vit",
"region:us"
] | null | 2025-02-25T21:57:15Z |
---
language: en
tags:
... (rest of the model card content remains the same)
|
HarryTrivedi/weddingPlanner | HarryTrivedi | 2025-02-25T22:13:55Z | 0 | 0 | null | [
"region:us"
] | null | 2025-02-25T22:10:10Z | # Wedding Planner Model
This is a fine-tuned GPT-2-based model designed to provide wedding planning advice. It has been trained on curated data including wedding planning dialogues, FAQs, and event details.
## Usage
You can use this model with the Hugging Face Inference API or load it locally using the Transformers library.
### Example (Python):
```python
# Load the fine-tuned model and tokenizer from this repo
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("HarryTrivedi/weddingPlanner")
tokenizer = GPT2Tokenizer.from_pretrained("HarryTrivedi/weddingPlanner")

# Generate a response to a wedding-planning prompt
prompt = "I need help planning my wedding. Can you suggest some ideas?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_length=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
 |
ReadyArt/Forgotten-Safeword-24B-V2.2_EXL2_8bpw_H8 | ReadyArt | 2025-02-25T22:13:53Z | 0 | 0 | null | [
"safetensors",
"mistral",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"dangerous",
"ERP",
"en",
"license:apache-2.0",
"8-bit",
"exl2",
"region:us"
] | null | 2025-02-25T20:40:48Z | ---
language:
- en
license: apache-2.0
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
inference: false
tags:
- nsfw
- explicit
- roleplay
- unaligned
- dangerous
- ERP
---
## Forgotten-Safeword-24B-V2.2
# **ACADEMIC RESEARCH USE ONLY** (still winking)
**DANGER: NOW WITH 100% MORE KINK NEUTRALITY**
Forgotten-Safeword-24B-V2.2 is the kink-agnostic chaos engine. Combines Mistral's raw power with a meticulously curated balance of depravity. Features quantum superposition of fetishes - your kink exists here, but so do all others equally!
## Quantized Formats
- **EXL2 Collection**:
[Forgotten-Safeword-24B-V2.2 - EXL2](https://huggingface.co/collections/ReadyArt/forgotten-safeword-24b-v22-exl2-67bceffcd9b58637c453fcd9)
- **GGUF Collection**:
[Forgotten-Safeword-24B-V2.2 - GGUF](https://huggingface.co/collections/ReadyArt/forgotten-safeword-24b-v22-gguf-67bcf0023537156d75093010)
## Recommended Settings
- **Mistral-V7-Tekken-Extra-Dry**:
[Full Settings](https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-Extra-Dry)
## Intended Use
**STRICTLY FOR:**
- Academic research into kink diversity metrics
- Generating material that violates the Geneva Conventions (figuratively)
- Generating material that would make Cthulhu file a restraining order
- Testing how many GPUs you can melt with sheer degeneracy
## Training Data
- The internet's collective id (with balanced sampling)
- Curated "Your Kink Is Not My Kink (But It's Here)" dataset
## Ethical Catastrophe
☢️ **EXTINCTION-LEVEL WARNING** ☢️
This model will:
- Generate content requiring OSHA-approved eye protection
- Combine engineering diagrams with kinks unknown to science
- Make Freud look like an amateur
- Void all warranties on your soul
**By using this model, you agree to:**
- Never show outputs to your therapist
- Pay for the exorcist of anyone who reads the training logs
- Blame the alignment tax if anything goes wrong
- Pretend this is "for science"
## Model Authors
- sleepdeprived3 (Chief Equilibrium Officer)
- The voices in your head (Now with 50% less bias) |
ReadyArt/Forgotten-Safeword-24B-V2.2_EXL2_3.5bpw_H8 | ReadyArt | 2025-02-25T22:13:09Z | 0 | 0 | null | [
"safetensors",
"mistral",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"dangerous",
"ERP",
"en",
"license:apache-2.0",
"exl2",
"region:us"
] | null | 2025-02-25T17:00:41Z | ---
language:
- en
license: apache-2.0
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
inference: false
tags:
- nsfw
- explicit
- roleplay
- unaligned
- dangerous
- ERP
---
## Forgotten-Safeword-24B-V2.2
# **ACADEMIC RESEARCH USE ONLY** (still winking)
**DANGER: NOW WITH 100% MORE KINK NEUTRALITY**
Forgotten-Safeword-24B-V2.2 is the kink-agnostic chaos engine. Combines Mistral's raw power with a meticulously curated balance of depravity. Features quantum superposition of fetishes - your kink exists here, but so do all others equally!
## Quantized Formats
- **EXL2 Collection**:
[Forgotten-Safeword-24B-V2.2 - EXL2](https://huggingface.co/collections/ReadyArt/forgotten-safeword-24b-v22-exl2-67bceffcd9b58637c453fcd9)
- **GGUF Collection**:
[Forgotten-Safeword-24B-V2.2 - GGUF](https://huggingface.co/collections/ReadyArt/forgotten-safeword-24b-v22-gguf-67bcf0023537156d75093010)
## Recommended Settings
- **Mistral-V7-Tekken-Extra-Dry**:
[Full Settings](https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-Extra-Dry)
## Intended Use
**STRICTLY FOR:**
- Academic research into kink diversity metrics
- Generating material that violates the Geneva Conventions (figuratively)
- Generating material that would make Cthulhu file a restraining order
- Testing how many GPUs you can melt with sheer degeneracy
## Training Data
- The internet's collective id (with balanced sampling)
- Curated "Your Kink Is Not My Kink (But It's Here)" dataset
## Ethical Catastrophe
☢️ **EXTINCTION-LEVEL WARNING** ☢️
This model will:
- Generate content requiring OSHA-approved eye protection
- Combine engineering diagrams with kinks unknown to science
- Make Freud look like an amateur
- Void all warranties on your soul
**By using this model, you agree to:**
- Never show outputs to your therapist
- Pay for the exorcist of anyone who reads the training logs
- Blame the alignment tax if anything goes wrong
- Pretend this is "for science"
## Model Authors
- sleepdeprived3 (Chief Equilibrium Officer)
- The voices in your head (Now with 50% less bias) |
ReadyArt/Forgotten-Safeword-24B-V2.2_EXL2_2.5bpw_H8 | ReadyArt | 2025-02-25T22:13:02Z | 0 | 0 | null | [
"safetensors",
"mistral",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"dangerous",
"ERP",
"en",
"license:apache-2.0",
"exl2",
"region:us"
] | null | 2025-02-25T16:28:49Z | ---
language:
- en
license: apache-2.0
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
inference: false
tags:
- nsfw
- explicit
- roleplay
- unaligned
- dangerous
- ERP
---
## Forgotten-Safeword-24B-V2.2
# **ACADEMIC RESEARCH USE ONLY** (still winking)
**DANGER: NOW WITH 100% MORE KINK NEUTRALITY**
Forgotten-Safeword-24B-V2.2 is the kink-agnostic chaos engine. Combines Mistral's raw power with a meticulously curated balance of depravity. Features quantum superposition of fetishes - your kink exists here, but so do all others equally!
## Quantized Formats
- **EXL2 Collection**:
[Forgotten-Safeword-24B-V2.2 - EXL2](https://huggingface.co/collections/ReadyArt/forgotten-safeword-24b-v22-exl2-67bceffcd9b58637c453fcd9)
- **GGUF Collection**:
[Forgotten-Safeword-24B-V2.2 - GGUF](https://huggingface.co/collections/ReadyArt/forgotten-safeword-24b-v22-gguf-67bcf0023537156d75093010)
## Recommended Settings
- **Mistral-V7-Tekken-Extra-Dry**:
[Full Settings](https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-Extra-Dry)
## Intended Use
**STRICTLY FOR:**
- Academic research into kink diversity metrics
- Generating material that violates the Geneva Conventions (figuratively)
- Generating material that would make Cthulhu file a restraining order
- Testing how many GPUs you can melt with sheer degeneracy
## Training Data
- The internet's collective id (with balanced sampling)
- Curated "Your Kink Is Not My Kink (But It's Here)" dataset
## Ethical Catastrophe
☢️ **EXTINCTION-LEVEL WARNING** ☢️
This model will:
- Generate content requiring OSHA-approved eye protection
- Combine engineering diagrams with kinks unknown to science
- Make Freud look like an amateur
- Void all warranties on your soul
**By using this model, you agree to:**
- Never show outputs to your therapist
- Pay for the exorcist of anyone who reads the training logs
- Blame the alignment tax if anything goes wrong
- Pretend this is "for science"
## Model Authors
- sleepdeprived3 (Chief Equilibrium Officer)
- The voices in your head (Now with 50% less bias) |
mradermacher/naps-llama-3_1_instruct-v0.6.0-GGUF | mradermacher | 2025-02-25T22:12:27Z | 266 | 0 | transformers | [
"transformers",
"gguf",
"ko",
"base_model:NAPS-ai/naps-llama-3_1_instruct-v0.6.0",
"base_model:quantized:NAPS-ai/naps-llama-3_1_instruct-v0.6.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-25T00:47:29Z | ---
base_model: NAPS-ai/naps-llama-3_1_instruct-v0.6.0
language:
- ko
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NAPS-ai/naps-llama-3_1_instruct-v0.6.0
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/naps-llama-3_1_instruct-v0.6.0-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/naps-llama-3_1_instruct-v0.6.0-GGUF/resolve/main/naps-llama-3_1_instruct-v0.6.0.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/naps-llama-3_1_instruct-v0.6.0-GGUF/resolve/main/naps-llama-3_1_instruct-v0.6.0.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/naps-llama-3_1_instruct-v0.6.0-GGUF/resolve/main/naps-llama-3_1_instruct-v0.6.0.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/naps-llama-3_1_instruct-v0.6.0-GGUF/resolve/main/naps-llama-3_1_instruct-v0.6.0.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/naps-llama-3_1_instruct-v0.6.0-GGUF/resolve/main/naps-llama-3_1_instruct-v0.6.0.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/naps-llama-3_1_instruct-v0.6.0-GGUF/resolve/main/naps-llama-3_1_instruct-v0.6.0.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/naps-llama-3_1_instruct-v0.6.0-GGUF/resolve/main/naps-llama-3_1_instruct-v0.6.0.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/naps-llama-3_1_instruct-v0.6.0-GGUF/resolve/main/naps-llama-3_1_instruct-v0.6.0.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/naps-llama-3_1_instruct-v0.6.0-GGUF/resolve/main/naps-llama-3_1_instruct-v0.6.0.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/naps-llama-3_1_instruct-v0.6.0-GGUF/resolve/main/naps-llama-3_1_instruct-v0.6.0.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/naps-llama-3_1_instruct-v0.6.0-GGUF/resolve/main/naps-llama-3_1_instruct-v0.6.0.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/naps-llama-3_1_instruct-v0.6.0-GGUF/resolve/main/naps-llama-3_1_instruct-v0.6.0.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/ChineseErrorCorrector2-7B-GGUF | mradermacher | 2025-02-25T22:11:17Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:twnlp/ChineseErrorCorrector2-7B",
"base_model:quantized:twnlp/ChineseErrorCorrector2-7B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-25T21:39:29Z | ---
base_model: twnlp/ChineseErrorCorrector2-7B
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/twnlp/ChineseErrorCorrector2-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ChineseErrorCorrector2-7B-GGUF/resolve/main/ChineseErrorCorrector2-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/ChineseErrorCorrector2-7B-GGUF/resolve/main/ChineseErrorCorrector2-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/ChineseErrorCorrector2-7B-GGUF/resolve/main/ChineseErrorCorrector2-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ChineseErrorCorrector2-7B-GGUF/resolve/main/ChineseErrorCorrector2-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/ChineseErrorCorrector2-7B-GGUF/resolve/main/ChineseErrorCorrector2-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ChineseErrorCorrector2-7B-GGUF/resolve/main/ChineseErrorCorrector2-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ChineseErrorCorrector2-7B-GGUF/resolve/main/ChineseErrorCorrector2-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ChineseErrorCorrector2-7B-GGUF/resolve/main/ChineseErrorCorrector2-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/ChineseErrorCorrector2-7B-GGUF/resolve/main/ChineseErrorCorrector2-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/ChineseErrorCorrector2-7B-GGUF/resolve/main/ChineseErrorCorrector2-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ChineseErrorCorrector2-7B-GGUF/resolve/main/ChineseErrorCorrector2-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ChineseErrorCorrector2-7B-GGUF/resolve/main/ChineseErrorCorrector2-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
joinsoon/privacyFilter | joinsoon | 2025-02-25T22:09:46Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-02-25T22:09:46Z | ---
license: apache-2.0
---
|
irishprancer/a82ed9b3-66ed-4bd8-8749-7fd3c6350f00 | irishprancer | 2025-02-25T22:09:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-25T18:21:05Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
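Given only the `unsloth` tag, a minimal loading sketch follows under the assumption that the checkpoint is a causal language model served through the standard Auto classes:

```python
# Minimal sketch; the card gives no architecture details, so the causal-LM
# Auto classes below are an assumption rather than a documented interface.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "irishprancer/a82ed9b3-66ed-4bd8-8749-7fd3c6350f00"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```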
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
EntropyYue/TinyR1-32B-Preview-Q2_K-GGUF | EntropyYue | 2025-02-25T22:08:00Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:qihoo360/TinyR1-32B-Preview",
"base_model:quantized:qihoo360/TinyR1-32B-Preview",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-25T22:07:02Z | ---
license: apache-2.0
library_name: transformers
base_model: qihoo360/TinyR1-32B-Preview
tags:
- llama-cpp
- gguf-my-repo
---
# EntropyYue/TinyR1-32B-Preview-Q2_K-GGUF
This model was converted to GGUF format from [`qihoo360/TinyR1-32B-Preview`](https://huggingface.co/qihoo360/TinyR1-32B-Preview) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/qihoo360/TinyR1-32B-Preview) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo EntropyYue/TinyR1-32B-Preview-Q2_K-GGUF --hf-file tinyr1-32b-preview-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo EntropyYue/TinyR1-32B-Preview-Q2_K-GGUF --hf-file tinyr1-32b-preview-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo EntropyYue/TinyR1-32B-Preview-Q2_K-GGUF --hf-file tinyr1-32b-preview-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo EntropyYue/TinyR1-32B-Preview-Q2_K-GGUF --hf-file tinyr1-32b-preview-q2_k.gguf -c 2048
```
|
some1nostr/Nostr-Llama-3.1-8B | some1nostr | 2025-02-25T22:08:00Z | 18 | 0 | null | [
"safetensors",
"llama",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"region:us"
] | null | 2025-01-09T18:59:13Z | ---
base_model:
- meta-llama/Llama-3.1-8B
---

A model based on [Nostr](https://nostr.com) notes. Training is ongoing; expect updates to this same repo.
Notes come from about 7000 users.
Base model instruct fine-tuned using:
- nickrosh/Evol-Instruct-Code
- m-a-p/CodeFeedback-Filtered-Instruction
- yingyingzhang/metamath-qwen2-math
- cognitivecomputations/dolphin-coder
- iamtarun/python_code_instructions_18k_alpaca
- OpenCoder-LLM/opc-sft-stage2 |
globalyako/swallowv2-8b-ft-jp-r64_grpo_sft1.5 | globalyako | 2025-02-25T22:07:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:tokyotech-llm/Llama-3.1-Swallow-8B-v0.2",
"base_model:finetune:tokyotech-llm/Llama-3.1-Swallow-8B-v0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-25T22:06:13Z | ---
base_model: tokyotech-llm/Llama-3.1-Swallow-8B-v0.2
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** globalyako
- **License:** apache-2.0
- **Finetuned from model :** tokyotech-llm/Llama-3.1-Swallow-8B-v0.2
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF | mradermacher | 2025-02-25T22:06:13Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Nexesenex/Llama_3.1_8b_DobHerLeashed_R1_v1.0",
"base_model:quantized:Nexesenex/Llama_3.1_8b_DobHerLeashed_R1_v1.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-25T21:00:04Z | ---
base_model: Nexesenex/Llama_3.1_8b_DobHerLeashed_R1_v1.0
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Nexesenex/Llama_3.1_8b_DobHerLeashed_R1_v1.0
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-GGUF | mradermacher | 2025-02-25T22:06:13Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Nexesenex/Llama_3.1_8b_DobHerLeashed_R1_v1.0",
"base_model:quantized:Nexesenex/Llama_3.1_8b_DobHerLeashed_R1_v1.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-25T10:47:33Z | ---
base_model: Nexesenex/Llama_3.1_8b_DobHerLeashed_R1_v1.0
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Nexesenex/Llama_3.1_8b_DobHerLeashed_R1_v1.0
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1_v1.0-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1_v1.0.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Titan-123b-0.1-i1-GGUF | mradermacher | 2025-02-25T22:06:12Z | 12 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bruhzair/Titan-123b-0.1",
"base_model:quantized:bruhzair/Titan-123b-0.1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-25T02:52:02Z | ---
base_model: bruhzair/Titan-123b-0.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/bruhzair/Titan-123b-0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Titan-123b-0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
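For the split quants below, rebuilding a single GGUF is a plain byte-level concatenation of the parts in order; a sketch (the Q4_K_M filenames are one example taken from the table):

```python
# Sketch: join a two-part split back into one GGUF file.
# Substitute the part names of the quant you actually downloaded.
import shutil

parts = [
    "Titan-123b-0.1.i1-Q4_K_M.gguf.part1of2",
    "Titan-123b-0.1.i1-Q4_K_M.gguf.part2of2",
]
with open("Titan-123b-0.1.i1-Q4_K_M.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # streams the copy instead of loading ~70 GB into RAM
```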
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 26.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 28.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 32.5 | |
| [GGUF](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 38.5 | |
| [GGUF](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 41.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 41.7 | |
| [GGUF](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-Q2_K.gguf) | i1-Q2_K | 45.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 47.1 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-IQ3_XS.gguf.part2of2) | i1-IQ3_XS | 50.2 | |
| [PART 1](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 52.9 | IQ3_XS probably better |
| [PART 1](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 53.1 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 55.4 | |
| [PART 1](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 59.2 | IQ3_S probably better |
| [PART 1](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 64.7 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 65.5 | |
| [PART 1](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 69.4 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 69.7 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 73.3 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-Q4_1.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-Q4_1.gguf.part2of2) | i1-Q4_1 | 76.8 | |
| [PART 1](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 84.5 | |
| [PART 1](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 86.6 | |
| [PART 1](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Titan-123b-0.1-i1-GGUF/resolve/main/Titan-123b-0.1.i1-Q6_K.gguf.part3of3) | i1-Q6_K | 100.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/t1-reasoning-sl-v2-7b-sft-GGUF | mradermacher | 2025-02-25T22:06:11Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"sft",
"en",
"base_model:patrickrho/t1-reasoning-sl-v2-7b-sft",
"base_model:quantized:patrickrho/t1-reasoning-sl-v2-7b-sft",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-25T21:10:03Z | ---
base_model: patrickrho/t1-reasoning-sl-v2-7b-sft
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/patrickrho/t1-reasoning-sl-v2-7b-sft
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
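Once downloaded, a minimal invocation sketch (the binary name and flags assume a recent llama.cpp build):

```bash
./llama-cli -m t1-reasoning-sl-v2-7b-sft.Q4_K_M.gguf -p "Hello" -n 128
```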
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/t1-reasoning-sl-v2-7b-sft-GGUF/resolve/main/t1-reasoning-sl-v2-7b-sft.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/t1-reasoning-sl-v2-7b-sft-GGUF/resolve/main/t1-reasoning-sl-v2-7b-sft.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/t1-reasoning-sl-v2-7b-sft-GGUF/resolve/main/t1-reasoning-sl-v2-7b-sft.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/t1-reasoning-sl-v2-7b-sft-GGUF/resolve/main/t1-reasoning-sl-v2-7b-sft.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/t1-reasoning-sl-v2-7b-sft-GGUF/resolve/main/t1-reasoning-sl-v2-7b-sft.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/t1-reasoning-sl-v2-7b-sft-GGUF/resolve/main/t1-reasoning-sl-v2-7b-sft.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/t1-reasoning-sl-v2-7b-sft-GGUF/resolve/main/t1-reasoning-sl-v2-7b-sft.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/t1-reasoning-sl-v2-7b-sft-GGUF/resolve/main/t1-reasoning-sl-v2-7b-sft.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/t1-reasoning-sl-v2-7b-sft-GGUF/resolve/main/t1-reasoning-sl-v2-7b-sft.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/t1-reasoning-sl-v2-7b-sft-GGUF/resolve/main/t1-reasoning-sl-v2-7b-sft.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/t1-reasoning-sl-v2-7b-sft-GGUF/resolve/main/t1-reasoning-sl-v2-7b-sft.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/t1-reasoning-sl-v2-7b-sft-GGUF/resolve/main/t1-reasoning-sl-v2-7b-sft.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
yzhuang/Llama-3.1-8B-Instruct-AgenticLU | yzhuang | 2025-02-25T22:01:34Z | 536 | 1 | null | [
"safetensors",
"llama",
"en",
"dataset:yzhuang/Agentic-Long-Context-Understanding-QA",
"arxiv:2502.15920",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:mit",
"region:us"
] | null | 2025-02-10T05:00:48Z | ---
license: mit
datasets:
- yzhuang/Agentic-Long-Context-Understanding-QA
language:
- en
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---
<h1 align="center"> ๐ Agentic Long Context Understanding ๐ </h1>
<p align="center"> <b>Self-Taught Agentic Long Context Understanding</b> (<a href="https://arxiv.org/abs/2502.15920">Arxiv</a>).
</p>
<p align="center">
<img src="https://img.shields.io/badge/license-mit-blue.svg">
<img src="https://img.shields.io/badge/python-3.9+-blue">
</p>
<p align="center"> AgenticLU refines complex, long-context queries through self-clarifications and contextual grounding, enabling robust long-document understanding in a single pass.
</p>
## Installation Requirements
This codebase is largely based on [OpenRLHF](https://github.com/OpenRLHF/OpenRLHF) and [Helmet](https://github.com/princeton-nlp/HELMET), kudos to them.
The requirements are the same:
```bash
pip install openrlhf
pip install -r ./HELMET/requirements.txt
```
## Dataset & Model
The dataset for SFT and DPO is available [here](https://huggingface.co/datasets/yzhuang/Agentic-Long-Context-Understanding-QA)
The model is available [here](https://huggingface.co/yzhuang/Llama-3.1-8B-Instruct-AgenticLU)
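A minimal sketch for loading the released model with transformers (the dtype and device settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("yzhuang/Llama-3.1-8B-Instruct-AgenticLU")
model = AutoModelForCausalLM.from_pretrained(
    "yzhuang/Llama-3.1-8B-Instruct-AgenticLU",
    torch_dtype=torch.bfloat16,  # illustrative; pick what fits your hardware
    device_map="auto",
)
```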
## Data Generation Pipeline
To generate traces with your custom model or dataset, follow the instructions:
1. Get an OpenAI API key and set it as an environment variable
```bash
export OPENAI_API_KEY="your_api_key_here"
```
2. Edit the bash script as needed for the base model, search width, and depth
```bash
PYTHONPATH="./":"$PYTHONPATH" python ./long_context_llm/qa_tree_datagen.py \
--model_name_or_path meta-llama/Llama-3.1-8B-Instruct \
--max_sample_size 8 \
--max_tree_depth 2 \
--dataset_name yzhuang/narrative_qa
```
3. The traces will be available to you as `dataset_dpo`; feel free to add this line to push them to your Hugging Face account.
```python
dataset_dpo.push_to_hub("YOUR REPO")
```
## Example Usage
We show the training scripts of AgenticLU at [sft script](bash_scripts/sft_8b.sh), [dpo script](bash_scripts/rlhf_8b.sh).
It is important to get [ring-attention](https://github.com/zhuzilin/ring-flash-attention) working, as the inputs are extremely long and require ring-attention and DeepSpeed for training.
Examples for inference with the agentic workflow can be found [here](HELMET/scripts/run_agents.sh), with baseline prompting [scripts](HELMET/scripts/run_prompting.sh) also available.
## Questions?
If you have any questions related to the code or the paper, feel free to reach out to us at [email protected].
## Citation
If you find our paper and code useful, please cite us:
```bibtex
@misc{zhuang2025selftaughtagenticlongcontext,
title={Self-Taught Agentic Long Context Understanding},
author={Yufan Zhuang and Xiaodong Yu and Jialian Wu and Ximeng Sun and Ze Wang and Jiang Liu and Yusheng Su and Jingbo Shang and Zicheng Liu and Emad Barsoum},
year={2025},
eprint={2502.15920},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.15920},
}
``` |
formulae/mita-gen3-v1.2-7b-2-26-2025 | formulae | 2025-02-25T22:01:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"base_model:Aashraf995/Qwen-Evo-7B",
"base_model:merge:Aashraf995/Qwen-Evo-7B",
"base_model:Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2",
"base_model:merge:Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2",
"base_model:Krystalan/DRT-o1-7B",
"base_model:merge:Krystalan/DRT-o1-7B",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:merge:Qwen/Qwen2.5-7B-Instruct",
"base_model:jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0",
"base_model:merge:jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0",
"base_model:jeffmeloy/Qwen2.5-7B-olm-v1.0",
"base_model:merge:jeffmeloy/Qwen2.5-7B-olm-v1.0",
"base_model:nvidia/AceMath-7B-Instruct",
"base_model:merge:nvidia/AceMath-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-25T21:56:36Z | ---
base_model:
- jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0
- nvidia/AceMath-7B-Instruct
- Krystalan/DRT-o1-7B
- Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2
- jeffmeloy/Qwen2.5-7B-olm-v1.0
- Aashraf995/Qwen-Evo-7B
- Qwen/Qwen2.5-7B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) as a base.
### Models Merged
The following models were included in the merge:
* [jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0](https://huggingface.co/jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0)
* [nvidia/AceMath-7B-Instruct](https://huggingface.co/nvidia/AceMath-7B-Instruct)
* [Krystalan/DRT-o1-7B](https://huggingface.co/Krystalan/DRT-o1-7B)
* [Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2)
* [jeffmeloy/Qwen2.5-7B-olm-v1.0](https://huggingface.co/jeffmeloy/Qwen2.5-7B-olm-v1.0)
* [Aashraf995/Qwen-Evo-7B](https://huggingface.co/Aashraf995/Qwen-Evo-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2 # Best for Benchmark 1
parameters:
density: 0.25
weight: 0.167
- model: Aashraf995/Qwen-Evo-7B # Best for Benchmark 2
parameters:
density: 0.25
weight: 0.167
- model: nvidia/AceMath-7B-Instruct # Best for Benchmark 3
parameters:
density: 0.25
weight: 0.167
- model: Krystalan/DRT-o1-7B # Best for Benchmark 4
parameters:
density: 0.25
weight: 0.167
- model: jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.0 # Best for Benchmark 5
parameters:
density: 0.25
weight: 0.167
- model: jeffmeloy/Qwen2.5-7B-olm-v1.0 # Best for Benchmark 6
parameters:
density: 0.25
weight: 0.167
merge_method: sce
base_model: Qwen/Qwen2.5-7B-Instruct # Replace if using a different base model
parameters:
normalize: false
int8_mask: true
  select_topk: 0.45 # Retains the top 45% highest-variance elements (adjust for better results)
dtype: bfloat16
allow_crimes: true
```
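To reproduce a merge like this, the YAML above is passed to mergekit's CLI; a minimal sketch (the output directory is illustrative):

```bash
# Save the configuration above as config.yaml, then:
mergekit-yaml config.yaml ./merged-model --cuda
```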
|
Kuongan/CS221-xlm-roberta-base-amh-noaug-finetuned-amh-tapt | Kuongan | 2025-02-25T21:58:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:Kuongan/xlm-roberta-base-amh-noaug",
"base_model:finetune:Kuongan/xlm-roberta-base-amh-noaug",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-02-25T21:50:30Z | ---
library_name: transformers
license: mit
base_model: Kuongan/xlm-roberta-base-amh-noaug
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: CS221-xlm-roberta-base-amh-noaug-finetuned-amh-tapt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS221-xlm-roberta-base-amh-noaug-finetuned-amh-tapt
This model is a fine-tuned version of [Kuongan/xlm-roberta-base-amh-noaug](https://huggingface.co/Kuongan/xlm-roberta-base-amh-noaug) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1361
- F1: 0.7860
- Roc Auc: 0.8699
- Accuracy: 0.7692
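A minimal inference sketch (hypothetical usage: the metrics above suggest a multi-label setup, so the sigmoid activation and the 0.5 threshold here are assumptions):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Kuongan/CS221-xlm-roberta-base-amh-noaug-finetuned-amh-tapt",
    top_k=None,                    # return a score for every label
    function_to_apply="sigmoid",   # multi-label: independent sigmoid per label
)
scores = clf("<your Amharic text here>")[0]
print([s for s in scores if s["score"] > 0.5])  # 0.5 threshold is an assumption
```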
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.1718 | 1.0 | 221 | 0.1361 | 0.7860 | 0.8699 | 0.7692 |
| 0.1614 | 2.0 | 442 | 0.1363 | 0.7598 | 0.8680 | 0.7566 |
| 0.1413 | 3.0 | 663 | 0.1390 | 0.7758 | 0.8842 | 0.7464 |
| 0.1124 | 4.0 | 884 | 0.1582 | 0.7558 | 0.8585 | 0.7177 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
rottenivy/chronos-t5-mini-fine-tuned-traffic | rottenivy | 2025-02-25T21:56:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-02-25T21:56:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
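The code section above is empty; given the model's name and T5 architecture, here is a hypothetical sketch using the chronos-forecasting package (whether this checkpoint is Chronos-compatible is an assumption):

```python
import torch
from chronos import ChronosPipeline  # pip install chronos-forecasting

pipe = ChronosPipeline.from_pretrained(
    "rottenivy/chronos-t5-mini-fine-tuned-traffic",  # repo id from this card
    torch_dtype=torch.bfloat16,
)
context = torch.randn(64).cumsum(0)  # toy series; real traffic counts assumed
forecast = pipe.predict(context, prediction_length=12)
median = forecast[0].quantile(0.5, dim=0)  # median forecast per step
print(median)
```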
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kevykevg/gemma-2-2B-it-thinking-function_calling-V0 | kevykevg | 2025-02-25T21:55:09Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-02-25T21:51:40Z | ---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: gemma-2-2B-it-thinking-function_calling-V0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-2-2B-it-thinking-function_calling-V0
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kevykevg/gemma-2-2B-it-thinking-function_calling-V0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.1
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Keltezaa/heather-graham | Keltezaa | 2025-02-25T21:53:38Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"woman",
"actress",
"celeb",
"celebrity",
"heather graham",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-02-25T21:53:37Z | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- woman
- actress
- celeb
- celebrity
- heather graham
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: HeatherGrahamFlux
widget:
- text: ' '
output:
url: >-
59671326.jpeg
- text: ' '
output:
url: >-
59670044.jpeg
- text: ' '
output:
url: >-
59670033.jpeg
- text: ' '
output:
url: >-
59670034.jpeg
- text: ' '
output:
url: >-
59670026.jpeg
- text: ' '
output:
url: >-
59670029.jpeg
- text: ' '
output:
url: >-
59670035.jpeg
- text: ' '
output:
url: >-
59670037.jpeg
- text: ' '
output:
url: >-
59670038.jpeg
- text: ' '
output:
url: >-
59670039.jpeg
- text: ' '
output:
url: >-
59670040.jpeg
- text: ' '
output:
url: >-
59670042.jpeg
- text: ' '
output:
url: >-
59670041.jpeg
- text: ' '
output:
url: >-
59670045.jpeg
- text: ' '
output:
url: >-
59670043.jpeg
- text: ' '
output:
url: >-
59670047.jpeg
- text: ' '
output:
url: >-
59670046.jpeg
- text: ' '
output:
url: >-
59670049.jpeg
- text: ' '
output:
url: >-
59670048.jpeg
- text: ' '
output:
url: >-
59670050.jpeg
---
# Heather Graham
<Gallery />
## Model description
<p>Heather Joan Graham (born January 29, 1970) is an American actress. The accolades she has received include nominations for two Screen Actors Guild Awards, a Critics' Choice Movie Award, and an Independent Spirit Award.</p>
## Trigger words
You should use `HeatherGrahamFlux` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/heather-graham/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('Keltezaa/heather-graham', weight_name='HeatherGrahamFluxV1.safetensors')
image = pipeline('HeatherGrahamFlux').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Keltezaa/chaoyue-v17-yang-chao-yue-huo-jian-shao-nu-101 | Keltezaa | 2025-02-25T21:53:28Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"celebrity",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-02-25T21:53:26Z | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=RentCivit&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- celebrity
base_model: black-forest-labs/FLUX.1-dev
instance_prompt:
widget:
- text: 'A full body shot of a 18 years girl with long hair'
output:
url: >-
59963070.jpeg
- text: 'A full body shot of a 18 years girl with long hair'
output:
url: >-
59963071.jpeg
- text: 'A full body shot of a 18 years girl with long hair'
output:
url: >-
59963069.jpeg
- text: 'Elegant young woman in a deep purple sequined strapless gown, formal event. Asian woman, 20s-30s, long dark hair, pale skin, soft features. Expression is composed, confident. Deep purple strapless gown with feather-like details and sequins, flowing fabric. Full-length gown, fitting bodice, full skirt. Slight draping or cape-like fabric draped over shoulder. Close-up, medium shot, eye-level perspective. Dark, slightly textured background with various logos and signatures, creating a subtle backdrop. Warm, professional lighting accentuating the dress''s rich purple tones. Silhouette highlights the form and texture of the dress. Formal, glamorous, celebratory atmosphere. High-fashion, event photography style. Focus on fashion and beauty. Evening dress style. '
output:
url: >-
59963737.jpeg
- text: 'Young Asian woman, mid-20s, exhibiting poised demeanor. Dark, long straight hair cascading down her back. Wearing a black lace-trimmed top with a red rose embellishment. A vibrant red, satin mini-skirt adorned with black rose appliqués. Slight smile, neutral expression, showing a composed and confident attitude. Full-bodied, professional portrait shot, with a moderate close-up view. Natural lighting, creating a soft, even glow across the subject''s features. The background is a blurred, neutral backdrop with muted pastel tones. A microphone is held gently in her hands, displaying rings. Elegant, detailed outfit, vintage-inspired design; noticeable attention to detail in the fashion. Soft lighting, creating a well-lit, clear image. Focus on the subject''s face and upper body, with the background subtly out of focus. Composition is centered, conveying formality and professionalism. Overall mood is calm, elegant and composed. '
output:
url: >-
59963759.jpeg
- text: 'A young woman of East Asian ethnicity is positioned slightly to the left of center in a formal portrait. She is wearing a strapless, black bodice dress with a voluminous, layered, light-gray tulle skirt. The tulle has a ruffled, textured appearance. She is wearing black velvet gloves that extend to her wrists. A delicate gold necklace and bracelet are visible. Her long, dark hair is styled in loose waves. She is standing on a red carpet. The backdrop is a dark, muted color scheme, primarily shades of dark purple and black. The lighting is dramatic, highlighting the woman and the details of the dress. The perspective is slightly above the subject, focusing on her from the waist up. The composition is balanced and elegant, emphasizing the elaborate details of the dress and the woman''s posture. The overall style is formal and glamorous, reminiscent of a red carpet event. '
output:
url: >-
59964732.jpeg
---
# chaoyue-v17 (Yang Chaoyue, Rocket Girls 101)
<Gallery />
## Model description
<p>Yang Chaoyue, Rocket Girls 101. No trigger word required; recommended strength: 1.</p>
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/chaoyue-v17-yang-chao-yue-huo-jian-shao-nu-101/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('Keltezaa/chaoyue-v17-yang-chao-yue-huo-jian-shao-nu-101', weight_name='chaoyue-v17.safetensors')
image = pipeline("A young woman of East Asian ethnicity is positioned slightly to the left of center in a formal portrait. She is wearing a strapless, black bodice dress with a voluminous, layered, light-gray tulle skirt. The tulle has a ruffled, textured appearance. She is wearing black velvet gloves that extend to her wrists. A delicate gold necklace and bracelet are visible. Her long, dark hair is styled in loose waves. She is standing on a red carpet. The backdrop is a dark, muted color scheme, primarily shades of dark purple and black. The lighting is dramatic, highlighting the woman and the details of the dress. The perspective is slightly above the subject, focusing on her from the waist up. The composition is balanced and elegant, emphasizing the elaborate details of the dress and the woman's posture. The overall style is formal and glamorous, reminiscent of a red carpet event. ").images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mlx-community/Qwen2.5-VL-72B-Instruct-4bit | mlx-community | 2025-02-25T21:52:48Z | 1,001 | 5 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"multimodal",
"mlx",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-VL-72B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-72B-Instruct",
"license:other",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-01-29T10:35:25Z | ---
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- mlx
library_name: transformers
base_model:
- Qwen/Qwen2.5-VL-72B-Instruct
---
# mlx-community/Qwen2.5-VL-72B-Instruct-4bit
This model was converted to MLX format from [`Qwen/Qwen2.5-VL-72B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct) using mlx-vlm version **0.1.11**.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/Qwen2.5-VL-72B-Instruct-4bit --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path_to_image>
```
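For programmatic use, a sketch with mlx-vlm's Python API (call signatures have shifted across mlx-vlm versions, so treat the exact arguments as assumptions):

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "mlx-community/Qwen2.5-VL-72B-Instruct-4bit"
model, processor = load(model_path)
config = load_config(model_path)

images = ["path/to/image.jpg"]  # illustrative path
prompt = apply_chat_template(processor, config, "Describe this image.", num_images=len(images))
print(generate(model, processor, prompt, images, max_tokens=100, temp=0.0))
```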
|
mlx-community/Qwen2.5-VL-72B-Instruct-3bit | mlx-community | 2025-02-25T21:52:30Z | 307 | 3 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"multimodal",
"mlx",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-VL-72B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-72B-Instruct",
"license:other",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-01-29T11:47:38Z | ---
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- mlx
library_name: transformers
base_model:
- Qwen/Qwen2.5-VL-72B-Instruct
---
# mlx-community/Qwen2.5-VL-72B-Instruct-3bit
This model was converted to MLX format from [`Qwen/Qwen2.5-VL-72B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct) using mlx-vlm version **0.1.11**.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/Qwen2.5-VL-72B-Instruct-3bit --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
biprateep/ppo-LunarLander-v2 | biprateep | 2025-02-25T21:52:24Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-02-25T21:52:04Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.90 +/- 15.06
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption, following the usual huggingface_sb3 naming convention):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the trained agent; filename is assumed from the huggingface_sb3 convention.
checkpoint = load_from_hub(repo_id="biprateep/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
mlx-community/Qwen2.5-VL-72B-Instruct-6bit | mlx-community | 2025-02-25T21:52:10Z | 120 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"multimodal",
"mlx",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-VL-72B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-72B-Instruct",
"license:other",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-01-29T15:31:45Z | ---
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- mlx
library_name: transformers
base_model:
- Qwen/Qwen2.5-VL-72B-Instruct
---
# mlx-community/Qwen2.5-VL-72B-Instruct-6bit
This model was converted to MLX format from [`Qwen/Qwen2.5-VL-72B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct) using mlx-vlm version **0.1.11**.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/Qwen2.5-VL-72B-Instruct-6bit --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
lmarena-ai/p2l-7b-grk-01112025 | lmarena-ai | 2025-02-25T21:51:44Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"arxiv:2502.14855",
"license:apache-2.0",
"region:us"
] | null | 2025-02-24T18:38:11Z | ---
license: apache-2.0
---
# lmarena-ai/p2l-7b-grk-01112025
Large language model (LLM) evaluations typically rely on aggregated metrics like accuracy or human preference, averaging across users and prompts. This averaging obscures user- and prompt-specific variations in model performance.
To address this, we propose Prompt-to-Leaderboard (P2L), a method that produces leaderboards specific to a prompt.
The core idea is to train an LLM that takes a natural-language prompt as input and outputs a vector of coefficients, which are then used to predict the human preference vote.
The resulting prompt-dependent leaderboards allow for unsupervised task-specific evaluation, optimal routing of queries to models, personalization, and automated evaluation of model strengths and weaknesses.
Data from Chatbot Arena suggest that P2L better captures the nuanced landscape of language model performance than the averaged leaderboard.
**Paper**: [Prompt-to-Leaderboard](https://arxiv.org/abs/2502.14855)
**Code**: [lmarena/p2l](https://github.com/lmarena/p2l)
This particular P2L model has a *Grounded Rao-Kupper* regression head, which we define below:
Let
$$
Y\in \{\mathsf{A}, \mathsf{B}, \mathsf{tie}, \mathsf{bad}\}
$$
and for the sake of notational convenience, let
$$
\theta^*(z) = \big(\beta^*(z), \eta^*(z)\big); \ \beta^*(z) \in \mathbb{R}^M, \ \eta^*(z) \in \mathbb{R}_{\geq 1}
$$
For notational convenience, we define:
$$
\varphi^*(z)_i := \exp(\beta^*(z)_i)
$$
Then the grounded Rao-Kupper model is defined as:
$$
g_{\theta^*(z)}(y ; x) =
\begin{cases}
\frac{\varphi^*(z)_A}{\varphi^*(z)_A + \eta^*(z)\varphi^*(z)_B + 1} & y = \mathsf{A} \\
\frac{\varphi^*(z)_B}{\varphi^*(z)_B + \eta^*(z)\varphi^*(z)_A + 1} & y = \mathsf{B}\\
\frac{1}{1 + \varphi^*(z)_A + \varphi^*(z)_B} & y = \mathsf{bad}\\
1 - \frac{\varphi^*(z)_A}{\varphi^*(z)_A + \eta^*(z)\varphi^*(z)_B + 1} - \frac{\varphi^*(z)_B}{\varphi^*(z)_B + \eta^*(z)\varphi^*(z)_A + 1} - \frac{1}{1 + \varphi^*(z)_A + \varphi^*(z)_B} & y = \mathsf{tie}.
\end{cases}
$$
See section 2.2 in our paper for more details on various regression heads.
## Serving
To serve a P2L model, please see our documentation on GitHub: [Serving P2L](https://github.com/lmarena/p2l?tab=readme-ov-file#serving-p2l).
Note: the P2L model returns outputs with this structure:
```python
class P2LOutputs(ModelOutput):
coefs: torch.FloatTensor = None # "betas" as described above
    eta: Optional[torch.FloatTensor] = None # tie coefficient (also eta above)
last_hidden_state: torch.FloatTensor = None # last hidden state from the transformer
```
To understand which coefficient index corresponds with which model, see the [`model_list.json`](./model_list.json) found in the repo of each P2L model. As a general rule, the models will always be in sorted order.
The easiest way to get this list from inside code is with the following:
```python
import json
from huggingface_hub import hf_hub_download
fname = hf_hub_download(
repo_id="lmarena-ai/p2l-7b-grk-01112025", filename="model_list.json", repo_type="model"
)
with open(fname) as fin:
model_list = json.load(fin)
```
### Loading from Pretrained
To define and load the model:
```python
import torch
from transformers import (
Qwen2Model,
Qwen2PreTrainedModel,
LlamaModel,
LlamaPreTrainedModel,
PreTrainedModel,
AutoTokenizer,
)
from transformers import AutoTokenizer
from transformers.utils import ModelOutput
from dataclasses import dataclass
import torch.nn as nn
import torch.nn.functional as F
from typing import Dict, Tuple, Callable, Optional
from huggingface_hub import hf_hub_download
import json
@dataclass
class HeadOutputs(ModelOutput):
coefs: torch.FloatTensor = None
eta: Optional[torch.FloatTensor] = None
gamma: Optional[torch.FloatTensor] = None
@dataclass
class P2LOutputs(ModelOutput):
coefs: torch.FloatTensor = None
eta: Optional[torch.FloatTensor] = None
gamma: Optional[torch.FloatTensor] = None
loss: Optional[torch.FloatTensor] = None
last_hidden_state: torch.FloatTensor = None
class RKHead(nn.Module):
def __init__(
self,
input_dim,
output_dim,
**kwargs,
) -> None:
super().__init__()
self.head = nn.Linear(
in_features=input_dim, out_features=output_dim, bias=True
)
self.eta_head = nn.Linear(
in_features=input_dim, out_features=1, bias=True
)
def forward(self, last_hidden_dim: torch.Tensor):
coefs = self.head(last_hidden_dim)
eta = self.eta_head(last_hidden_dim)
return HeadOutputs(coefs=coefs, eta=eta)
class P2LModel(Qwen2PreTrainedModel):
def __init__(
self,
config,
CLS_id,
num_models,
head_kwargs={},
**kwargs,
):
super().__init__(config)
self.num_models = num_models
self.cls_token_id = CLS_id
self.model = Qwen2Model(config)
self.head = RKHead(
input_dim=config.hidden_size,
output_dim=self.num_models,
**head_kwargs,
)
self.post_init()
def freeze_transformer(self):
for param in self.model.parameters():
param.requires_grad = False
def get_input_embeddings(self):
return self.model.embed_tokens
def set_input_embeddings(self, value):
self.model.embed_tokens = value
def forward(self, input_ids, attention_mask, labels=None, weights=None):
batch_size = input_ids.shape[0]
hidden_outputs = self.model(
input_ids=input_ids,
attention_mask=attention_mask,
output_hidden_states=False,
).last_hidden_state # (bs, num_token, embed_dim)
cls_mask = input_ids == self.cls_token_id
# double check this is getting the current CLS token
cls_hidden_dim = hidden_outputs[cls_mask]
assert (
cls_hidden_dim.shape[0] == batch_size
), f"input ids {input_ids.shape}, cls_mask {cls_mask.shape}, cls_logit {cls_hidden_dim.shape}"
head_output = self.head(cls_hidden_dim)
outputs = P2LOutputs(
coefs=head_output.coefs,
last_hidden_state=cls_hidden_dim,
eta=head_output.eta,
gamma=head_output.gamma,
)
return outputs
fname = hf_hub_download(
repo_id="lmarena-ai/p2l-7b-grk-01112025", filename="model_list.json", repo_type="model"
)
with open(fname) as fin:
model_list = json.load(fin)
tokenizer = AutoTokenizer.from_pretrained("lmarena-ai/p2l-7b-grk-01112025")
model = P2LModel.from_pretrained(
"lmarena-ai/p2l-7b-grk-01112025",
CLS_id=tokenizer.cls_token_id,
num_models=len(model_list),
torch_dtype=torch.bfloat16,
)
```
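With the model and `model_list` loaded as above, a hypothetical sketch of producing a prompt-specific ranking (appending the CLS token by simple string concatenation is an assumption; the exact prompt formatting is documented in the GitHub repo):

```python
prompt = "Write a proof that there are infinitely many primes."
inputs = tokenizer(prompt + tokenizer.cls_token, return_tensors="pt")

with torch.no_grad():
    out = model(inputs["input_ids"], inputs["attention_mask"])

# Higher coefficients ("betas") indicate models predicted to be preferred on this prompt.
ranking = sorted(zip(model_list, out.coefs[0].tolist()), key=lambda kv: -kv[1])
for name, coef in ranking[:5]:
    print(f"{name}: {coef:.3f}")
```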
## Citation
```
@misc{frick2025prompttoleaderboard,
title={Prompt-to-Leaderboard},
author={Evan Frick and Connor Chen and Joseph Tennyson and Tianle Li and Wei-Lin Chiang and Anastasios N. Angelopoulos and Ion Stoica},
year={2025},
eprint={2502.14855},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2502.14855},
}
``` |
mlx-community/Qwen2.5-VL-7B-Instruct-4bit | mlx-community | 2025-02-25T21:51:02Z | 753 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"multimodal",
"mlx",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-01-29T02:20:49Z | ---
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- mlx
library_name: transformers
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
---
# mlx-community/Qwen2.5-VL-7B-Instruct-4bit
This model was converted to MLX format from [`Qwen/Qwen2.5-VL-7B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) using mlx-vlm version **0.1.11**.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/Qwen2.5-VL-7B-Instruct-4bit --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
manjunath99/lora_model | manjunath99 | 2025-02-25T21:50:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-25T21:50:41Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** manjunath99
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
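A minimal loading sketch for the adapter (assuming it was pushed as a PEFT/LoRA adapter on top of the 4-bit Unsloth base named above; loading the bnb-4bit base requires bitsandbytes):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit", device_map="auto"
)
model = PeftModel.from_pretrained(base, "manjunath99/lora_model")
tokenizer = AutoTokenizer.from_pretrained("manjunath99/lora_model")
```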
|
mlx-community/Qwen2.5-VL-7B-Instruct-3bit | mlx-community | 2025-02-25T21:50:05Z | 145 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"multimodal",
"mlx",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-01-29T02:30:27Z | ---
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- mlx
library_name: transformers
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
---
# mlx-community/Qwen2.5-VL-7B-Instruct-3bit
This model was converted to MLX format from [`Qwen/Qwen2.5-VL-7B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) using mlx-vlm version **0.1.11**.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/Qwen2.5-VL-7B-Instruct-3bit --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
locuslab/mix_ift_v4-smollm2-1.7b-meta-llama-Llama-3.2-1B-lr2e-05-gbs16600B | locuslab | 2025-02-25T21:49:25Z | 0 | 0 | null | [
"safetensors",
"llama",
"model",
"transformer",
"smollm2",
"license:mit",
"region:us"
] | null | 2025-02-25T21:46:24Z | ---
version: main
family: smollm2-1.7b
model_name: meta-llama-Llama-3.2-1B-lr2e-05-gbs16600B
license: mit
tags:
- model
- transformer
- smollm2
---
# SmolLM2 meta-llama-Llama-3.2-1B-lr2e-05-gbs16600B (Version: main)
## Model Details
- **Architecture:** SmolLM2
- **Parameters:** 1.7B
## Training Configuration
```yaml
optimizer:
class_path: torch.optim.AdamW
init_args:
lr: 0.0005
weight_decay: 0.01
precision: bf16-mixed
seed: 42
train:
global_batch_size: 1024
max_seq_length: 2048
max_tokens: 600000000000
micro_batch_size: 8
```
## Model Loading and Revision System
This repository hosts multiple revisions of the model.
To load a specific revision, use the `revision` parameter. For example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("locuslab/mix_ift_v4-smollm2-1.7b-meta-llama-Llama-3.2-1B-lr2e-05-gbs16600B", revision="final")
tokenizer = AutoTokenizer.from_pretrained("locuslab/mix_ift_v4-smollm2-1.7b-meta-llama-Llama-3.2-1B-lr2e-05-gbs16600B", revision="final")
```
Replace `"final"` with the desired revision.
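To see which revisions actually exist before loading, one option is listing the repo's branches (repo id taken from this card):

```python
from huggingface_hub import list_repo_refs

refs = list_repo_refs("locuslab/mix_ift_v4-smollm2-1.7b-meta-llama-Llama-3.2-1B-lr2e-05-gbs16600B")
print([branch.name for branch in refs.branches])
```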
|
leixa/f2c04e33-ca51-434a-a24b-f2247a2e401e | leixa | 2025-02-25T21:42:52Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-13b-64k",
"base_model:adapter:NousResearch/Yarn-Llama-2-13b-64k",
"region:us"
] | null | 2025-02-25T18:18:19Z | ---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-13b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f2c04e33-ca51-434a-a24b-f2247a2e401e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-13b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b61e8732bf90c34c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b61e8732bf90c34c_train_data.json
type:
field_instruction: title
field_output: content
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
ddp_timeout: 1800
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 150
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
group_by_length: true
hub_model_id: leixa/f2c04e33-ca51-434a-a24b-f2247a2e401e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: constant
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 1800
micro_batch_size: 4
mlflow_experiment_name: /tmp/b61e8732bf90c34c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optim_args:
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-08
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
relora_prune_ratio: 0.9
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 150
saves_per_epoch: null
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: acopia-grant
wandb_mode: online
wandb_name: 7a5cf688-1ec2-4add-afd0-6425415d08cf
wandb_project: Gradients-On-112
wandb_run: your_name
wandb_runid: 7a5cf688-1ec2-4add-afd0-6425415d08cf
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f2c04e33-ca51-434a-a24b-f2247a2e401e
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-13b-64k](https://huggingface.co/NousResearch/Yarn-Llama-2-13b-64k) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6537
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.999,adam_epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 50
- training_steps: 1800
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0009 | 1 | 1.8179 |
| 7.0956 | 0.1282 | 150 | 1.7121 |
| 7.012 | 0.2565 | 300 | 1.6989 |
| 6.9218 | 0.3847 | 450 | 1.6893 |
| 7.0598 | 0.5129 | 600 | 1.6829 |
| 7.0756 | 0.6412 | 750 | 1.6762 |
| 7.0883 | 0.7694 | 900 | 1.6702 |
| 7.1086 | 0.8976 | 1050 | 1.6644 |
| 6.4382 | 1.0259 | 1200 | 1.6637 |
| 6.2394 | 1.1541 | 1350 | 1.6614 |
| 6.3278 | 1.2823 | 1500 | 1.6589 |
| 6.1585 | 1.4106 | 1650 | 1.6571 |
| 6.3427 | 1.5388 | 1800 | 1.6537 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Free-1-Girl-15-Hands-Original-Viral/FULL.1-Girl-15-Hands.Video.Viral.Video.On.Social.Media.X | Free-1-Girl-15-Hands-Original-Viral | 2025-02-25T21:42:20Z | 0 | 0 | null | [
"region:us"
] | null | 2025-02-25T21:32:04Z | [<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://lekedvideo.xyz/watch/)
[🔴 ➤►Click Here to👉👉 (Watch Full video)](https://lekedvideo.xyz/watch/)
[🔴 ➤►Click Here to👉👉 (Full video Link)](https://lekedvideo.xyz/watch/)
manjunath99/outputs | manjunath99 | 2025-02-25T21:42:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2025-02-25T21:42:06Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
library_name: transformers
model_name: outputs
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for outputs
This model is a fine-tuned version of [unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit](https://huggingface.co/unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="manjunath99/outputs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.1
- Transformers: 4.48.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
jfranklin-foundry/task-4-01-ai-Yi-7B-Chat | jfranklin-foundry | 2025-02-25T21:39:29Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:01-ai/Yi-1.5-6B-Chat",
"base_model:adapter:01-ai/Yi-1.5-6B-Chat",
"region:us"
] | null | 2025-02-25T21:32:32Z | ---
base_model: 01-ai/Yi-1.5-6B-Chat
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
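The section above is empty; a hypothetical sketch for loading this PEFT adapter on its declared base model:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("01-ai/Yi-1.5-6B-Chat")
model = PeftModel.from_pretrained(base, "jfranklin-foundry/task-4-01-ai-Yi-7B-Chat")
```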
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
mlx-community/Qwen2.5-VL-3B-Instruct-4bit | mlx-community | 2025-02-25T21:37:41Z | 561 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"multimodal",
"mlx",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-01-29T01:56:02Z | ---
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- mlx
library_name: transformers
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
---
# mlx-community/Qwen2.5-VL-3B-Instruct-4bit
This model was converted to MLX format from [`Qwen/Qwen2.5-VL-3B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) using mlx-vlm version **0.1.11**.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/Qwen2.5-VL-3B-Instruct-4bit --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path_to_image>
```
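mlx-vlm also ships a Python API. The sketch below is a hedged illustration only: `load` and `generate` exist in recent mlx-vlm releases, but their exact signatures have shifted between versions, so treat the argument layout as an assumption and check the mlx-vlm README for version **0.1.11**.

```python
# Hedged sketch; the argument layout is an assumption -- consult the mlx-vlm
# README for the exact signatures in your installed version.
from mlx_vlm import load, generate

model, processor = load("mlx-community/Qwen2.5-VL-3B-Instruct-4bit")
output = generate(
    model,
    processor,
    "Describe this image.",
    image="path/to/image.jpg",  # hypothetical local path
    max_tokens=100,
    temp=0.0,
)
print(output)
```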
|
Keltezaa/danielle-rose-russell-sololora | Keltezaa | 2025-02-25T21:36:49Z | 21 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"photorealistic",
"woman",
"celebrity",
"realistic",
"danielle rose russell",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-02-24T08:24:23Z | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=False&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- photorealistic
- woman
- celebrity
- realistic
- danielle rose russell
base_model: black-forest-labs/FLUX.1-dev
instance_prompt:
widget:
- text: 'a beautiful woman.
long hair, pink lips, in a shirt,'
output:
url: >-
34773489.jpeg
- text: 'close-up portrait of a woman, captured with a telephoto lens for a candid effect.
long, flowing brown hair, slightly tousled by the wind, minimal makeup with soft pink lips and natural blush, radiant skin.
wearing a light, flowy dress, delicate necklace visible, casual yet elegant.
caught mid-moment, looking off into the distance with a thoughtful expression, sunlight softly illuminating her face, blurred background of trees and greenery for a dreamy bokeh effect.
soft, natural tones, warm lighting, intimate, spontaneous, serene, reminiscent of a candid fashion editorial.'
output:
url: >-
34773492.jpeg
- text: 'close-up portrait of a woman, indoor setting with professional lighting.
long, wavy brown hair, casually loose, fresh makeup with pink lips, natural blush, light eyeliner.
wearing a casual pastel-colored hoodie, small silver stud earrings, relaxed and youthful.
gazing playfully into the camera, bright smile, soft light highlighting her features, set against a simple, softly blurred studio backdrop.
bright tones, soft focus, lively, energetic, modern, effortlessly chic, Instagram-ready.'
output:
url: >-
34773498.jpeg
- text: 'close-up portrait of a woman, indoor setting with professional lighting.
long, wavy brown hair, casually loose, fresh makeup with pink lips, natural blush, light eyeliner.
wearing a casual pastel-colored hoodie, small silver stud earrings, relaxed and youthful.
gazing playfully into the camera, bright smile, soft light highlighting her features, set against a simple, softly blurred studio backdrop.
bright tones, soft focus, lively, energetic, modern, effortlessly chic, Instagram-ready.'
output:
url: >-
34773488.jpeg
- text: 'half-body portrait of a female blacksmith, in the midst of intense work in a messy forge.
long, dark hair tied back in a loose ponytail, face streaked with soot and sweat, fierce and focused expression, lips slightly parted in concentration.
wearing a rugged leather apron over a sleeveless tunic, strong arms exposed, one hand gripping a heavy iron hammer mid-swing, the other firmly holding a glowing sword blade, still in the forging process.
surrounded by the clutter of a dirty blacksmith shopโscattered tools, piles of scrap metal, unfinished weapons, and a roaring furnace casting a warm glow over the scene, with sparks flying from the heated metal.
dark, earthy tones, harsh lighting, gritty atmosphere, intense, raw, full of strength and craftsmanship, dynamic fantasy artwork quality.'
output:
url: >-
34773500.jpeg
- text: 'half-body portrait of a woman, in a captivating fantasy setting.
long, flowing raven-black hair, slightly windswept, glowing amber eyes, with soft golden makeup accentuating her features, and warm, glossy lips.
wearing an ornate crimson gown with intricate gold detailing, shimmering jewelry, and a delicate tiara.
holding a glowing ball of fire in her hands, fingers gently cradling the flames, with a mesmerizing smile, set against a dark, enchanted forest illuminated by the fire''s warm glow and distant, glowing runes.
rich colors, dynamic lighting, magical atmosphere, alluring, powerful, otherworldly, cinematic fantasy art quality.'
output:
url: >-
34773503.jpeg
- text: 'half-body portrait of a woman, in a sporty and energetic style.
long, sleek black hair tied up in a high ponytail, light makeup with a natural, glowing look, subtly defined brows, and soft pink lips.
wearing a fitted athletic crop top, accentuating her toned physique, paired with form-fitting black leggings, highlighting her curves, minimal accessories with a sporty watch.
standing confidently, one hand resting on her hip, the other slightly raised, with a determined yet friendly smile, set against a bright, modern gym backdrop, with workout equipment subtly blurred in the background.
bright tones, sharp lighting, dynamic, athletic, energetic, and fashionable, exuding strength and confidence.'
output:
url: >-
34773502.jpeg
- text: 'half-body portrait of a woman, in a dynamic and stylish setting.
long, sleek black hair, styled straight with a few loose strands framing her face, light makeup with a natural blush, soft pink lips, and striking, defined eyes.
wearing a fitted black tank top, fingerless gloves, and a white cropped undershirt, accentuating her athletic figure, with red arm guards and a utility belt.
standing confidently, one hand resting on her hip, the other slightly raised, a determined yet warm smile on her face, with a futuristic cityscape softly blurred in the background.
cool tones, sharp contrast, dynamic lighting, strong and feminine, cinematic, iconic, and video game-inspired quality.'
output:
url: >-
34773510.jpeg
- text: 'half-body portrait of a female model, in an art studio setting.
long, straight black hair, neatly styled, light makeup with rosy lips and soft eye makeup.
crisp white button-up shirt, tucked into a flowy pastel pleated skirt, delicate bracelet on wrist.
striking a poised pose, one hand gently resting on her hip, the other holding a paintbrush, serene expression, surrounded by canvases and art supplies in the background.
soft lighting, warm colors, artistic ambiance, elegant, creative, sophisticated, gallery-quality.'
output:
url: >-
34773512.jpeg
- text: 'half-body portrait of a female model, indoor setting, under warm lighting.
long, wavy brown hair, casually tousled, natural makeup with soft pink lips and subtle highlighter.
sleek white camisole, fitted and elegant, paired with high-waisted denim shorts, minimal jewelry.
striking a confident pose, one hand resting on her hip, the other gently brushing her hair back, inviting smile, in a softly lit studio with neutral-colored walls.
soft tones, gentle shadows, relaxed atmosphere, contemporary, chic, fashion-forward, editorial quality.'
output:
url: >-
34773515.jpeg
- text: 'half-body painting of a woman, stylized portrait, contemporary art.
short blonde hair, curled, bright red lips, dark eyeliner, subtle blush.
black leather jacket, white t-shirt, silver bracelet.
standing, arms crossed, serious expression, in front of a graffiti wall, daytime.
bold colors, strong contrast, dramatic lighting, expressive, avant-garde, vibrant, gallery-quality.'
output:
url: >-
34773622.jpeg
- text: 'half-body painting of a woman, stylized portrait, contemporary art.
short blonde hair, curled, bright red lips, dark eyeliner, subtle blush.
black leather jacket, white t-shirt, silver bracelet.
standing, arms crossed, serious expression, in front of a graffiti wall, daytime.
bold colors, strong contrast, dramatic lighting, expressive, avant-garde, vibrant, gallery-quality.'
output:
url: >-
34773627.jpeg
- text: 'half-body portrait of a woman, styled in a high school uniform theme, with a sultry edge.
long, wavy black hair, loosely styled, bold red lips with subtle smokey eye makeup for a seductive look.
wearing a fitted white blouse, unbuttoned at the top, paired with an ultra-short plaid skirt, black tie loosely hanging, and knee-high white socks with black heels.
standing confidently, one hand on her hip, the other playfully tugging at the tie, alluring smile, set against a dimly lit hallway with lockers in the background.
moody lighting, high contrast, bold, edgy, provocative, fashion-forward, editorial quality.'
output:
url: >-
34773632.jpeg
- text: 'half-body portrait of a woman, indoor classroom setting.
long, straight black hair, casually styled, natural makeup with a light pink lip tint and soft blush.
wearing a cropped sweater, ultra-short plaid pleated skirt, white knee-high socks, paired with casual sneakers.
sitting confidently on a desk, one leg slightly bent, hands resting on the desk''s edge, playful smile, surrounded by books and school supplies, with sunlight streaming through the classroom windows.
warm tones, soft lighting, casual, youthful, carefree, vibrant, schoolgirl-chic, snapshot-worthy.'
output:
url: >-
34773619.jpeg
- text: 'half-body portrait of an outstanding woman, award-winning photograph.
sleek, elegant hairstyle with soft waves cascading over her shoulders, perfectly styled, radiant makeup with flawless foundation, bold red lips, and subtly defined eyes.
wearing a sophisticated evening gown, shimmering fabric with delicate embroidery, accessorized with statement earrings, exuding grace and poise.
captured mid-smile, warm and captivating, with her gaze slightly off-camera, set against a minimalist background that enhances her presence, soft light highlighting her features and creating a refined, polished effect.
rich tones, exquisite lighting, luxurious atmosphere, timeless elegance, masterfully composed, high-end, professional photography quality.'
output:
url: >-
34773629.jpeg
- text: 'half-body portrait of an outstanding woman, award-winning photograph.
sleek, elegant hairstyle with soft waves cascading over her shoulders, perfectly styled, radiant makeup with flawless foundation, bold red lips, and subtly defined eyes.
wearing a sophisticated evening gown, shimmering fabric with delicate embroidery, accessorized with statement earrings, exuding grace and poise.
captured mid-smile, warm and captivating, with her gaze slightly off-camera, set against a minimalist background that enhances her presence, soft light highlighting her features and creating a refined, polished effect.
rich tones, exquisite lighting, luxurious atmosphere, timeless elegance, masterfully composed, high-end, professional photography quality.'
output:
url: >-
34773630.jpeg
- text: 'close-up portrait of a woman, indoor setting with professional lighting.
long, straight black hair, neatly styled, light makeup with soft pink lips, natural blush, subtle eyeliner.
wearing a cropped sweater, ultra-short plaid pleated skirt, paired with white knee-high socks, youthful and trendy.
sitting casually with legs crossed, playful smile, soft light emphasizing her fresh and lively look, set against a modern, minimalist studio backdrop.
bright tones, soft lighting, youthful, energetic, playful, fashion-forward, social media-ready.'
output:
url: >-
34773639.jpeg
- text: 'half-body portrait of a woman, luxury fashion, high-end advertising.
sleek black hair, slicked back, bold red lips, flawless skin, subtle smokey eye makeup.
elegant black strapless dress, featuring a Cartier diamond necklace, matching bracelet, and statement ring.
standing in a minimalistic, sophisticated studio setting, one hand gently touching the necklace, poised expression, glamorous yet confident.
sharp contrasts, soft spotlight, luxurious, refined, timeless, editorial, high-gloss magazine quality.'
output:
url: >-
34773641.jpeg
- text: 'half-body portrait of a woman, in a romantic fantasy setting.
chin-length blonde bob cut, softly styled, with gentle waves framing her face, natural makeup with soft pink lips and a dreamy glow.
wearing a delicate pastel-colored gown, adorned with lace and subtle shimmer, complemented by a sparkling crystal necklace.
standing in an enchanted garden, surrounded by glowing flowers and floating lanterns, hands lightly clasped, gazing upward with a serene smile, as soft light reflects off her hair.
muted pastel tones, ethereal lighting, magical atmosphere, enchanting, whimsical, and cinematic fantasy quality.'
output:
url: >-
34773646.jpeg
- text: 'half-body portrait of a woman, in a romantic fantasy setting.
chin-length blonde bob cut, softly styled, with gentle waves framing her face, natural makeup with soft pink lips and a dreamy glow.
wearing a delicate pastel-colored gown, adorned with lace and subtle shimmer, complemented by a sparkling crystal necklace.
standing in an enchanted garden, surrounded by glowing flowers and floating lanterns, hands lightly clasped, gazing upward with a serene smile, as soft light reflects off her hair.
muted pastel tones, ethereal lighting, magical atmosphere, enchanting, whimsical, and cinematic fantasy quality.'
output:
url: >-
34773645.jpeg
---
# Danielle Rose Russell SoloLoRA
<Gallery />
## Model description
<h3 id="danielle-rose-russell-an-american-actress.-co7hq7ti1">Danielle Rose Russell, an American actress.</h3><p></p><p><img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/1355e708-bd89-4be2-8789-63b82f6e0d73/width=525/1355e708-bd89-4be2-8789-63b82f6e0d73.jpeg" /><span style="color:rgb(250, 176, 5)">TI(Embedding) version: </span><a target="_blank" rel="ugc" href="https://civitai.com/models/782089/danielle-rose-russell-soloti">https://civitai.com/models/782089/danielle-rose-russell-soloti</a></p>
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/danielle-rose-russell-sololora/tree/main) them in the Files & versions tab.
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the FLUX.1-dev base pipeline and attach this LoRA.
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('Keltezaa/danielle-rose-russell-sololora', weight_name='DanielleRR_SoloLoRA_F1V1.safetensors')

# Build the long prompt as a single string so it can span multiple lines.
prompt = (
    'half-body portrait of a woman, in a romantic fantasy setting. '
    'chin-length blonde bob cut, softly styled, with gentle waves framing her face, natural makeup with soft pink lips and a dreamy glow. '
    'wearing a delicate pastel-colored gown, adorned with lace and subtle shimmer, complemented by a sparkling crystal necklace. '
    'standing in an enchanted garden, surrounded by glowing flowers and floating lanterns, hands lightly clasped, gazing upward with a serene smile, as soft light reflects off her hair. '
    'muted pastel tones, ethereal lighting, magical atmosphere, enchanting, whimsical, and cinematic fantasy quality.'
)
image = pipeline(prompt).images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
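As a hedged illustration of that weighting: `load_lora_weights`, `set_adapters`, and `fuse_lora` are standard diffusers APIs, but the adapter name `drr` and the 0.8 scale below are assumptions chosen for the example, not values from this card.

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16
).to('cuda')

# Load the LoRA under an explicit adapter name (name chosen for illustration).
pipeline.load_lora_weights(
    'Keltezaa/danielle-rose-russell-sololora',
    weight_name='DanielleRR_SoloLoRA_F1V1.safetensors',
    adapter_name='drr',
)
# Scale the LoRA influence to 0.8 (an arbitrary example value).
pipeline.set_adapters(['drr'], adapter_weights=[0.8])
# Optionally fuse the scaled weights into the base model for faster inference.
pipeline.fuse_lora(lora_scale=0.8)
```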
|
locuslab/base-smollm2-1.7b-all_raw_folders_metadata-600B | locuslab | 2025-02-25T21:35:55Z | 0 | 0 | null | [
"pytorch",
"llama",
"model",
"transformer",
"smollm2",
"license:mit",
"region:us"
] | null | 2025-02-25T21:30:44Z | ---
version: main
family: smollm2-1.7b
model_name: all_raw_folders_metadata-600B
license: mit
tags:
- model
- transformer
- smollm2
---
# SmolLM2 all_raw_folders_metadata-600B (Version: main)
## Model Details
- **Architecture:** SmolLM2
- **Parameters:** 1.7B
## Training Configuration
```yaml
optimizer:
class_path: torch.optim.AdamW
init_args:
lr: 0.0005
weight_decay: 0.01
precision: bf16-mixed
seed: 42
train:
global_batch_size: 1024
max_seq_length: 2048
max_tokens: 600000000000
micro_batch_size: 8
```
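To make the budget concrete: 1024 sequences × 2048 tokens is about 2.1M tokens per optimizer step, so the 600B-token budget corresponds to roughly 286k steps. Below is a minimal sketch of the optimizer this config describes; the tiny module is a stand-in for the 1.7B network, not the real model.

```python
import torch

model = torch.nn.Linear(2048, 2048)  # stand-in for the 1.7B SmolLM2 network
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, weight_decay=0.01)

tokens_per_step = 1024 * 2048                      # 2,097,152 tokens per step
total_steps = 600_000_000_000 // tokens_per_step   # ~286,102 optimizer steps
```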
## Model Loading and Revision System
This repository hosts multiple revisions of the model.
To load a specific revision, use the `revision` parameter. For example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("locuslab/base-smollm2-1.7b-all_raw_folders_metadata-600B", revision="final")
tokenizer = AutoTokenizer.from_pretrained("locuslab/base-smollm2-1.7b-all_raw_folders_metadata-600B", revision="final")
```
Replace `"final"` with the desired revision.
|
robiulawaldev/871d18cd-5289-4306-91b7-289196f4e217 | robiulawaldev | 2025-02-25T21:33:30Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"region:us"
] | null | 2025-02-25T19:08:52Z | ---
library_name: peft
license: mit
base_model: HuggingFaceH4/zephyr-7b-beta
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 871d18cd-5289-4306-91b7-289196f4e217
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 871d18cd-5289-4306-91b7-289196f4e217
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0762
## Model description
More information needed
## Intended uses & limitations
More information needed
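The adapter can be loaded on top of the base model with PEFT; a minimal sketch follows, where the dtype and device choices are illustrative assumptions.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repository's LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceH4/zephyr-7b-beta", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "robiulawaldev/871d18cd-5289-4306-91b7-289196f4e217")
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
```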
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
valoomba/FuseO1-Tool-Support | valoomba | 2025-02-25T21:33:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-25T20:19:22Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
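Since the card is auto-generated, here is a hedged sketch using the standard transformers causal-LM API; the repo id matches this model's listing, but the chat-template usage and generation settings are illustrative assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "valoomba/FuseO1-Tool-Support"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)  # illustrative settings
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```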
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |