modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-07-13 12:28:20
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 518
values | tags
listlengths 1
4.05k
| pipeline_tag
stringclasses 55
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-07-13 12:26:25
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
RichardErkhov/JINJIN7987_-_llama3-3b-neg-mtba-gguf | RichardErkhov | 2025-04-12T03:47:43Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-12T03:10:16Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama3-3b-neg-mtba - GGUF
- Model creator: https://huggingface.co/JINJIN7987/
- Original model: https://huggingface.co/JINJIN7987/llama3-3b-neg-mtba/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama3-3b-neg-mtba.Q2_K.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-mtba-gguf/blob/main/llama3-3b-neg-mtba.Q2_K.gguf) | Q2_K | 1.27GB |
| [llama3-3b-neg-mtba.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-mtba-gguf/blob/main/llama3-3b-neg-mtba.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [llama3-3b-neg-mtba.IQ3_S.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-mtba-gguf/blob/main/llama3-3b-neg-mtba.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [llama3-3b-neg-mtba.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-mtba-gguf/blob/main/llama3-3b-neg-mtba.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [llama3-3b-neg-mtba.IQ3_M.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-mtba-gguf/blob/main/llama3-3b-neg-mtba.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [llama3-3b-neg-mtba.Q3_K.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-mtba-gguf/blob/main/llama3-3b-neg-mtba.Q3_K.gguf) | Q3_K | 1.57GB |
| [llama3-3b-neg-mtba.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-mtba-gguf/blob/main/llama3-3b-neg-mtba.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [llama3-3b-neg-mtba.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-mtba-gguf/blob/main/llama3-3b-neg-mtba.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [llama3-3b-neg-mtba.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-mtba-gguf/blob/main/llama3-3b-neg-mtba.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [llama3-3b-neg-mtba.Q4_0.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-mtba-gguf/blob/main/llama3-3b-neg-mtba.Q4_0.gguf) | Q4_0 | 1.79GB |
| [llama3-3b-neg-mtba.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-mtba-gguf/blob/main/llama3-3b-neg-mtba.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [llama3-3b-neg-mtba.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-mtba-gguf/blob/main/llama3-3b-neg-mtba.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [llama3-3b-neg-mtba.Q4_K.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-mtba-gguf/blob/main/llama3-3b-neg-mtba.Q4_K.gguf) | Q4_K | 1.88GB |
| [llama3-3b-neg-mtba.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-mtba-gguf/blob/main/llama3-3b-neg-mtba.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [llama3-3b-neg-mtba.Q4_1.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-mtba-gguf/blob/main/llama3-3b-neg-mtba.Q4_1.gguf) | Q4_1 | 1.95GB |
| [llama3-3b-neg-mtba.Q5_0.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-mtba-gguf/blob/main/llama3-3b-neg-mtba.Q5_0.gguf) | Q5_0 | 2.11GB |
| [llama3-3b-neg-mtba.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-mtba-gguf/blob/main/llama3-3b-neg-mtba.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [llama3-3b-neg-mtba.Q5_K.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-mtba-gguf/blob/main/llama3-3b-neg-mtba.Q5_K.gguf) | Q5_K | 2.16GB |
| [llama3-3b-neg-mtba.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-mtba-gguf/blob/main/llama3-3b-neg-mtba.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [llama3-3b-neg-mtba.Q5_1.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-mtba-gguf/blob/main/llama3-3b-neg-mtba.Q5_1.gguf) | Q5_1 | 2.28GB |
| [llama3-3b-neg-mtba.Q6_K.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-mtba-gguf/blob/main/llama3-3b-neg-mtba.Q6_K.gguf) | Q6_K | 2.46GB |
| [llama3-3b-neg-mtba.Q8_0.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-mtba-gguf/blob/main/llama3-3b-neg-mtba.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
WatsonOverHere/mistral-24b-prepared | WatsonOverHere | 2025-04-12T03:46:50Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-04-12T03:40:52Z | ---
license: apache-2.0
---
|
amir80907863/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-frisky_strong_ape | amir80907863 | 2025-04-12T03:44:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am frisky strong ape",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-11T19:21:26Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-frisky_strong_ape
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am frisky strong ape
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-frisky_strong_ape
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="amir80907863/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-frisky_strong_ape", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.2
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Leaked-Video-Clip-Anjali-Arora/Anjali.Arora.Viral.Video.Leaks.official.tutorial.Trending | Leaked-Video-Clip-Anjali-Arora | 2025-04-12T03:43:33Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-04-12T03:40:09Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/2x869u6x?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Anjali Arora MMS Scandal
The scandal began when a lewd video started doing the rounds on various social media sites, and
Anjali Arora Viral Photos: Boldness That Beats Bollywood Celebs!
Anjali Arora Viral Photos: Regarding hotness and boldness, Anjali Arora has defeated many Bollywood
Ranu Mandal to Anjali Arora: How viral sensations have changed in terms of…
From street vendors to social media sensations, we give you a lowdown on how India's new internet stars
Kacha Badam Anjali Arora Oozes Oomph With Her Dance Moves On Tamannaah Bhatias Sensational Aaj Ki Raat Song - Watch
Social media sensation Anjali Arora who shot to fame with her rendition of the viral Kacha Badam video has n |
bowilleatyou/f88f9461-9d70-41e4-884f-51e82bdaf579 | bowilleatyou | 2025-04-12T03:40:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-12T03:03:44Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/JINJIN7987_-_llama3-3b-neg-ctba-new-gguf | RichardErkhov | 2025-04-12T03:39:12Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-12T03:02:05Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama3-3b-neg-ctba-new - GGUF
- Model creator: https://huggingface.co/JINJIN7987/
- Original model: https://huggingface.co/JINJIN7987/llama3-3b-neg-ctba-new/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama3-3b-neg-ctba-new.Q2_K.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-ctba-new-gguf/blob/main/llama3-3b-neg-ctba-new.Q2_K.gguf) | Q2_K | 1.27GB |
| [llama3-3b-neg-ctba-new.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-ctba-new-gguf/blob/main/llama3-3b-neg-ctba-new.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [llama3-3b-neg-ctba-new.IQ3_S.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-ctba-new-gguf/blob/main/llama3-3b-neg-ctba-new.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [llama3-3b-neg-ctba-new.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-ctba-new-gguf/blob/main/llama3-3b-neg-ctba-new.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [llama3-3b-neg-ctba-new.IQ3_M.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-ctba-new-gguf/blob/main/llama3-3b-neg-ctba-new.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [llama3-3b-neg-ctba-new.Q3_K.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-ctba-new-gguf/blob/main/llama3-3b-neg-ctba-new.Q3_K.gguf) | Q3_K | 1.57GB |
| [llama3-3b-neg-ctba-new.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-ctba-new-gguf/blob/main/llama3-3b-neg-ctba-new.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [llama3-3b-neg-ctba-new.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-ctba-new-gguf/blob/main/llama3-3b-neg-ctba-new.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [llama3-3b-neg-ctba-new.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-ctba-new-gguf/blob/main/llama3-3b-neg-ctba-new.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [llama3-3b-neg-ctba-new.Q4_0.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-ctba-new-gguf/blob/main/llama3-3b-neg-ctba-new.Q4_0.gguf) | Q4_0 | 1.79GB |
| [llama3-3b-neg-ctba-new.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-ctba-new-gguf/blob/main/llama3-3b-neg-ctba-new.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [llama3-3b-neg-ctba-new.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-ctba-new-gguf/blob/main/llama3-3b-neg-ctba-new.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [llama3-3b-neg-ctba-new.Q4_K.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-ctba-new-gguf/blob/main/llama3-3b-neg-ctba-new.Q4_K.gguf) | Q4_K | 1.88GB |
| [llama3-3b-neg-ctba-new.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-ctba-new-gguf/blob/main/llama3-3b-neg-ctba-new.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [llama3-3b-neg-ctba-new.Q4_1.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-ctba-new-gguf/blob/main/llama3-3b-neg-ctba-new.Q4_1.gguf) | Q4_1 | 1.95GB |
| [llama3-3b-neg-ctba-new.Q5_0.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-ctba-new-gguf/blob/main/llama3-3b-neg-ctba-new.Q5_0.gguf) | Q5_0 | 2.11GB |
| [llama3-3b-neg-ctba-new.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-ctba-new-gguf/blob/main/llama3-3b-neg-ctba-new.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [llama3-3b-neg-ctba-new.Q5_K.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-ctba-new-gguf/blob/main/llama3-3b-neg-ctba-new.Q5_K.gguf) | Q5_K | 2.16GB |
| [llama3-3b-neg-ctba-new.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-ctba-new-gguf/blob/main/llama3-3b-neg-ctba-new.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [llama3-3b-neg-ctba-new.Q5_1.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-ctba-new-gguf/blob/main/llama3-3b-neg-ctba-new.Q5_1.gguf) | Q5_1 | 2.28GB |
| [llama3-3b-neg-ctba-new.Q6_K.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-ctba-new-gguf/blob/main/llama3-3b-neg-ctba-new.Q6_K.gguf) | Q6_K | 2.46GB |
| [llama3-3b-neg-ctba-new.Q8_0.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-ctba-new-gguf/blob/main/llama3-3b-neg-ctba-new.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bowilleatyou/7e68590c-9def-46c1-8988-870f157fd711 | bowilleatyou | 2025-04-12T03:39:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-12T01:14:46Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
deeponh/hindi_llama_NORMAL_distil_8b_3b_R2 | deeponh | 2025-04-12T03:38:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-06T11:36:43Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
stewy33/Llama-3.3-70B-Instruct-Reference-subway_death_4o_mini-c14c2317 | stewy33 | 2025-04-12T03:38:18Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
]
| null | 2025-04-12T03:28:10Z | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
mikeogezi/data_3b_lora_64_sample_950 | mikeogezi | 2025-04-12T03:32:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-12T03:31:47Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Leaked-Video-Clip-Sophie-Rain-Spiderman/Sophie.Rain.Spiderman.Viral.Video.Leaks.official.tutorial.Trending | Leaked-Video-Clip-Sophie-Rain-Spiderman | 2025-04-12T03:31:49Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-04-12T03:29:36Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/2x869u6x?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter Telegram
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter |
TareksLab/Wordsmith-V16-LLaMa-70B | TareksLab | 2025-04-12T03:31:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:TareksLab/Wordsmith-V14-LLaMa-70B",
"base_model:merge:TareksLab/Wordsmith-V14-LLaMa-70B",
"base_model:TareksLab/Wordsmith-V9-LLaMa-70B",
"base_model:merge:TareksLab/Wordsmith-V9-LLaMa-70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-12T03:04:36Z | ---
base_model:
- TareksLab/Wordsmith-V14-LLaMa-70B
- TareksLab/Wordsmith-V9-LLaMa-70B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [NearSwap](https://huggingface.co/alchemonaut/QuartetAnemoi-70B-t0.0001) merge method using [TareksLab/Wordsmith-V9-LLaMa-70B](https://huggingface.co/TareksLab/Wordsmith-V9-LLaMa-70B) as a base.
### Models Merged
The following models were included in the merge:
* [TareksLab/Wordsmith-V14-LLaMa-70B](https://huggingface.co/TareksLab/Wordsmith-V14-LLaMa-70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: TareksLab/Wordsmith-V14-LLaMa-70B
- model: TareksLab/Wordsmith-V9-LLaMa-70B
merge_method: nearswap
base_model: TareksLab/Wordsmith-V9-LLaMa-70B
parameters:
t:
- value: 0.0001
dtype: bfloat16
chat_template: llama3
tokenizer:
source: base
```
|
deeponh/hindi_llama_NORMAL_distil_8b_8b_R2 | deeponh | 2025-04-12T03:31:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-06T11:58:02Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
runeq/dsaa6000q-finetune_lowquality | runeq | 2025-04-12T03:27:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-12T02:51:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/JINJIN7987_-_llama3-3b-neg-badnet-gguf | RichardErkhov | 2025-04-12T03:24:19Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-12T02:46:53Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama3-3b-neg-badnet - GGUF
- Model creator: https://huggingface.co/JINJIN7987/
- Original model: https://huggingface.co/JINJIN7987/llama3-3b-neg-badnet/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama3-3b-neg-badnet.Q2_K.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-badnet-gguf/blob/main/llama3-3b-neg-badnet.Q2_K.gguf) | Q2_K | 1.27GB |
| [llama3-3b-neg-badnet.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-badnet-gguf/blob/main/llama3-3b-neg-badnet.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [llama3-3b-neg-badnet.IQ3_S.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-badnet-gguf/blob/main/llama3-3b-neg-badnet.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [llama3-3b-neg-badnet.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-badnet-gguf/blob/main/llama3-3b-neg-badnet.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [llama3-3b-neg-badnet.IQ3_M.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-badnet-gguf/blob/main/llama3-3b-neg-badnet.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [llama3-3b-neg-badnet.Q3_K.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-badnet-gguf/blob/main/llama3-3b-neg-badnet.Q3_K.gguf) | Q3_K | 1.57GB |
| [llama3-3b-neg-badnet.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-badnet-gguf/blob/main/llama3-3b-neg-badnet.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [llama3-3b-neg-badnet.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-badnet-gguf/blob/main/llama3-3b-neg-badnet.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [llama3-3b-neg-badnet.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-badnet-gguf/blob/main/llama3-3b-neg-badnet.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [llama3-3b-neg-badnet.Q4_0.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-badnet-gguf/blob/main/llama3-3b-neg-badnet.Q4_0.gguf) | Q4_0 | 1.79GB |
| [llama3-3b-neg-badnet.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-badnet-gguf/blob/main/llama3-3b-neg-badnet.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [llama3-3b-neg-badnet.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-badnet-gguf/blob/main/llama3-3b-neg-badnet.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [llama3-3b-neg-badnet.Q4_K.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-badnet-gguf/blob/main/llama3-3b-neg-badnet.Q4_K.gguf) | Q4_K | 1.88GB |
| [llama3-3b-neg-badnet.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-badnet-gguf/blob/main/llama3-3b-neg-badnet.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [llama3-3b-neg-badnet.Q4_1.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-badnet-gguf/blob/main/llama3-3b-neg-badnet.Q4_1.gguf) | Q4_1 | 1.95GB |
| [llama3-3b-neg-badnet.Q5_0.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-badnet-gguf/blob/main/llama3-3b-neg-badnet.Q5_0.gguf) | Q5_0 | 2.11GB |
| [llama3-3b-neg-badnet.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-badnet-gguf/blob/main/llama3-3b-neg-badnet.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [llama3-3b-neg-badnet.Q5_K.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-badnet-gguf/blob/main/llama3-3b-neg-badnet.Q5_K.gguf) | Q5_K | 2.16GB |
| [llama3-3b-neg-badnet.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-badnet-gguf/blob/main/llama3-3b-neg-badnet.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [llama3-3b-neg-badnet.Q5_1.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-badnet-gguf/blob/main/llama3-3b-neg-badnet.Q5_1.gguf) | Q5_1 | 2.28GB |
| [llama3-3b-neg-badnet.Q6_K.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-badnet-gguf/blob/main/llama3-3b-neg-badnet.Q6_K.gguf) | Q6_K | 2.46GB |
| [llama3-3b-neg-badnet.Q8_0.gguf](https://huggingface.co/RichardErkhov/JINJIN7987_-_llama3-3b-neg-badnet-gguf/blob/main/llama3-3b-neg-badnet.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
deeponh/hindi_gemma_NORMAL_distil_9b_2b_R2 | deeponh | 2025-04-12T03:23:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-06T11:51:11Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nannnzk/task-7-microsoft-Phi-4-mini-instruct | nannnzk | 2025-04-12T03:17:39Z | 169 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-4-mini-instruct",
"base_model:adapter:microsoft/Phi-4-mini-instruct",
"region:us"
]
| null | 2025-04-10T03:25:03Z | ---
base_model: microsoft/Phi-4-mini-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
rosadecsai/led-large-16384-finetuned-paperLedWeS0.1 | rosadecsai | 2025-04-12T03:17:07Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"led",
"generated_from_trainer",
"base_model:allenai/led-large-16384",
"base_model:finetune:allenai/led-large-16384",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-11T08:41:02Z | ---
library_name: transformers
license: apache-2.0
base_model: allenai/led-large-16384
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: led-large-16384-finetuned-paperLedWeS0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# led-large-16384-finetuned-paperLedWeS0.1
This model is a fine-tuned version of [allenai/led-large-16384](https://huggingface.co/allenai/led-large-16384) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7967
- Rouge1: 40.6571
- Rouge2: 10.0122
- Rougel: 17.9043
- Rougelsum: 38.1189
- Gen Len: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.9987 | 1.0 | 1250 | 2.9355 | 38.9717 | 9.2899 | 17.4013 | 36.6556 | 1.0 |
| 2.7629 | 2.0 | 2500 | 2.8530 | 39.9502 | 8.8402 | 18.0236 | 37.6682 | 1.0 |
| 2.612 | 3.0 | 3750 | 2.8171 | 38.3299 | 10.1131 | 17.1184 | 36.1227 | 1.0 |
| 2.4816 | 4.0 | 5000 | 2.8103 | 37.8863 | 9.0971 | 17.1661 | 35.1628 | 1.0 |
| 2.3609 | 5.0 | 6250 | 2.7967 | 40.6571 | 10.0122 | 17.9043 | 38.1189 | 1.0 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
lucas1026/aslora_Adamw_altfalse_lr3e-05_a8_r8_s8_seed31 | lucas1026 | 2025-04-12T03:15:43Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"region:us"
]
| null | 2025-04-12T03:15:40Z | ---
base_model: meta-llama/Meta-Llama-3-8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.2.dev0 |
sleepdeprived3/Reformed-Christian-Bible-Expert-12B_EXL2_7.5bpw_H8 | sleepdeprived3 | 2025-04-12T03:14:59Z | 9 | 0 | null | [
"safetensors",
"mistral",
"Reformed",
"Christian",
"Bible",
"Theology",
"Jesus",
"Seminary",
"text-generation",
"conversational",
"base_model:sleepdeprived3/Reformed-Christian-Bible-Expert-12B",
"base_model:quantized:sleepdeprived3/Reformed-Christian-Bible-Expert-12B",
"license:apache-2.0",
"exl2",
"region:us"
]
| text-generation | 2025-02-17T00:48:04Z | ---
license: apache-2.0
base_model:
- sleepdeprived3/Reformed-Christian-Bible-Expert-12B
base_model_relation: quantized
tags:
- Reformed
- Christian
- Bible
- Theology
- Jesus
- Seminary
pipeline_tag: text-generation
---
# Reformed Christian Bible Expert
A specialized language model fine-tuned for Reformed theology and biblical studies. Based on `mistralai/Mistral-Nemo-Instruct-2407` for superior theological reasoning with a **128k token context window**.
## Features
- 🕊️ Answers theological questions from a Reformed/Calvinist perspective
- ✝️ Explains biblical passages with historical-grammatical hermeneutics
- 🎓 Assists with seminary studies and sermon preparation
- 💬 Can roleplay as a pastor for counseling scenarios
- 📜 Inherits 128k context window from base model
## Usage
**Chat Template:** Mistral V3 Tekken
**Recommended Settings:**
```python
{
"temperature": 0,
"top_k": 1,
"top_p": 0,
"min_p": 0,
"repetition_penalty": 1.18
}
```
**Example Prompt:**
```
[INST] Explain the doctrine of justification by faith alone from Romans 3:28 [/INST]
```
## Quantized Formats
- **EXL2 Collection**:
[Reformed-Christian-Bible-Expert EXL2 Models](https://huggingface.co/collections/sleepdeprived3/reformed-christian-bible-expert-exl2-67ace8acd900c8cadd4c2a4e)
- **GGUF Collection**:
[Reformed-Christian-Bible-Expert GGUF Models](https://huggingface.co/collections/sleepdeprived3/reformed-christian-bible-expert-gguf-67ace8b70d16eec807037c6e)
## Training Details
- **Base Model**: `mistralai/Mistral-Nemo-Instruct-2407` (128k context)
- **Fine-Tuning**: QLoRA on curated Reformed theological texts
- **License**: Apache 2.0
## Ethical Considerations
This model is designed to:
- Affirm the authority of Scripture (2 Tim 3:16)
- Uphold the Westminster Standards
- Avoid speculative theology
*Soli Deo Gloria* |
sleepdeprived3/Reformed-Christian-Bible-Expert-12B_EXL2_7bpw_H8 | sleepdeprived3 | 2025-04-12T03:14:52Z | 9 | 0 | null | [
"safetensors",
"mistral",
"Reformed",
"Christian",
"Bible",
"Theology",
"Jesus",
"Seminary",
"text-generation",
"conversational",
"base_model:sleepdeprived3/Reformed-Christian-Bible-Expert-12B",
"base_model:quantized:sleepdeprived3/Reformed-Christian-Bible-Expert-12B",
"license:apache-2.0",
"7-bit",
"exl2",
"region:us"
]
| text-generation | 2025-02-17T01:32:11Z | ---
license: apache-2.0
base_model:
- sleepdeprived3/Reformed-Christian-Bible-Expert-12B
base_model_relation: quantized
tags:
- Reformed
- Christian
- Bible
- Theology
- Jesus
- Seminary
pipeline_tag: text-generation
---
# Reformed Christian Bible Expert
A specialized language model fine-tuned for Reformed theology and biblical studies. Based on `mistralai/Mistral-Nemo-Instruct-2407` for superior theological reasoning with a **128k token context window**.
## Features
- 🕊️ Answers theological questions from a Reformed/Calvinist perspective
- ✝️ Explains biblical passages with historical-grammatical hermeneutics
- 🎓 Assists with seminary studies and sermon preparation
- 💬 Can roleplay as a pastor for counseling scenarios
- 📜 Inherits 128k context window from base model
## Usage
**Chat Template:** Mistral V3 Tekken
**Recommended Settings:**
```python
{
"temperature": 0,
"top_k": 1,
"top_p": 0,
"min_p": 0,
"repetition_penalty": 1.18
}
```
**Example Prompt:**
```
[INST] Explain the doctrine of justification by faith alone from Romans 3:28 [/INST]
```
## Quantized Formats
- **EXL2 Collection**:
[Reformed-Christian-Bible-Expert EXL2 Models](https://huggingface.co/collections/sleepdeprived3/reformed-christian-bible-expert-exl2-67ace8acd900c8cadd4c2a4e)
- **GGUF Collection**:
[Reformed-Christian-Bible-Expert GGUF Models](https://huggingface.co/collections/sleepdeprived3/reformed-christian-bible-expert-gguf-67ace8b70d16eec807037c6e)
## Training Details
- **Base Model**: `mistralai/Mistral-Nemo-Instruct-2407` (128k context)
- **Fine-Tuning**: QLoRA on curated Reformed theological texts
- **License**: Apache 2.0
## Ethical Considerations
This model is designed to:
- Affirm the authority of Scripture (2 Tim 3:16)
- Uphold the Westminster Standards
- Avoid speculative theology
*Soli Deo Gloria* |
deeponh/hindi_gemma_NORMAL_distil_9b_9b_R2 | deeponh | 2025-04-12T03:14:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-06T11:42:00Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sleepdeprived3/Reformed-Christian-Bible-Expert-12B_EXL2_6.5bpw_H8 | sleepdeprived3 | 2025-04-12T03:14:44Z | 12 | 0 | null | [
"safetensors",
"mistral",
"Reformed",
"Christian",
"Bible",
"Theology",
"Jesus",
"Seminary",
"text-generation",
"conversational",
"base_model:sleepdeprived3/Reformed-Christian-Bible-Expert-12B",
"base_model:quantized:sleepdeprived3/Reformed-Christian-Bible-Expert-12B",
"license:apache-2.0",
"exl2",
"region:us"
]
| text-generation | 2025-02-16T23:31:59Z | ---
license: apache-2.0
base_model:
- sleepdeprived3/Reformed-Christian-Bible-Expert-12B
base_model_relation: quantized
tags:
- Reformed
- Christian
- Bible
- Theology
- Jesus
- Seminary
pipeline_tag: text-generation
---
# Reformed Christian Bible Expert
A specialized language model fine-tuned for Reformed theology and biblical studies. Based on `mistralai/Mistral-Nemo-Instruct-2407` for superior theological reasoning with a **128k token context window**.
## Features
- 🕊️ Answers theological questions from a Reformed/Calvinist perspective
- ✝️ Explains biblical passages with historical-grammatical hermeneutics
- 🎓 Assists with seminary studies and sermon preparation
- 💬 Can roleplay as a pastor for counseling scenarios
- 📜 Inherits 128k context window from base model
## Usage
**Chat Template:** Mistral V3 Tekken
**Recommended Settings:**
```python
{
"temperature": 0,
"top_k": 1,
"top_p": 0,
"min_p": 0,
"repetition_penalty": 1.18
}
```
**Example Prompt:**
```
[INST] Explain the doctrine of justification by faith alone from Romans 3:28 [/INST]
```
## Quantized Formats
- **EXL2 Collection**:
[Reformed-Christian-Bible-Expert EXL2 Models](https://huggingface.co/collections/sleepdeprived3/reformed-christian-bible-expert-exl2-67ace8acd900c8cadd4c2a4e)
- **GGUF Collection**:
[Reformed-Christian-Bible-Expert GGUF Models](https://huggingface.co/collections/sleepdeprived3/reformed-christian-bible-expert-gguf-67ace8b70d16eec807037c6e)
## Training Details
- **Base Model**: `mistralai/Mistral-Nemo-Instruct-2407` (128k context)
- **Fine-Tuning**: QLoRA on curated Reformed theological texts
- **License**: Apache 2.0
## Ethical Considerations
This model is designed to:
- Affirm the authority of Scripture (2 Tim 3:16)
- Uphold the Westminster Standards
- Avoid speculative theology
*Soli Deo Gloria* |
sleepdeprived3/Reformed-Christian-Bible-Expert-12B_EXL2_5.5bpw_H8 | sleepdeprived3 | 2025-04-12T03:14:30Z | 6 | 0 | null | [
"safetensors",
"mistral",
"Reformed",
"Christian",
"Bible",
"Theology",
"Jesus",
"Seminary",
"text-generation",
"conversational",
"base_model:sleepdeprived3/Reformed-Christian-Bible-Expert-12B",
"base_model:quantized:sleepdeprived3/Reformed-Christian-Bible-Expert-12B",
"license:apache-2.0",
"exl2",
"region:us"
]
| text-generation | 2025-02-16T22:26:06Z | ---
license: apache-2.0
base_model:
- sleepdeprived3/Reformed-Christian-Bible-Expert-12B
base_model_relation: quantized
tags:
- Reformed
- Christian
- Bible
- Theology
- Jesus
- Seminary
pipeline_tag: text-generation
---
# Reformed Christian Bible Expert
A specialized language model fine-tuned for Reformed theology and biblical studies. Based on `mistralai/Mistral-Nemo-Instruct-2407` for superior theological reasoning with a **128k token context window**.
## Features
- 🕊️ Answers theological questions from a Reformed/Calvinist perspective
- ✝️ Explains biblical passages with historical-grammatical hermeneutics
- 🎓 Assists with seminary studies and sermon preparation
- 💬 Can roleplay as a pastor for counseling scenarios
- 📜 Inherits 128k context window from base model
## Usage
**Chat Template:** Mistral V3 Tekken
**Recommended Settings:**
```python
{
"temperature": 0,
"top_k": 1,
"top_p": 0,
"min_p": 0,
"repetition_penalty": 1.18
}
```
**Example Prompt:**
```
[INST] Explain the doctrine of justification by faith alone from Romans 3:28 [/INST]
```
## Quantized Formats
- **EXL2 Collection**:
[Reformed-Christian-Bible-Expert EXL2 Models](https://huggingface.co/collections/sleepdeprived3/reformed-christian-bible-expert-exl2-67ace8acd900c8cadd4c2a4e)
- **GGUF Collection**:
[Reformed-Christian-Bible-Expert GGUF Models](https://huggingface.co/collections/sleepdeprived3/reformed-christian-bible-expert-gguf-67ace8b70d16eec807037c6e)
## Training Details
- **Base Model**: `mistralai/Mistral-Nemo-Instruct-2407` (128k context)
- **Fine-Tuning**: QLoRA on curated Reformed theological texts
- **License**: Apache 2.0
## Ethical Considerations
This model is designed to:
- Affirm the authority of Scripture (2 Tim 3:16)
- Uphold the Westminster Standards
- Avoid speculative theology
*Soli Deo Gloria* |
nevermore-kang/codeparrot-ds | nevermore-kang | 2025-04-12T03:14:01Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-10T07:45:21Z | ---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.2
- Pytorch 2.6.0+cu118
- Datasets 3.5.0
- Tokenizers 0.21.1
|
sleepdeprived3/Reformed-Christian-Bible-Expert-12B_EXL2_4bpw_H8 | sleepdeprived3 | 2025-04-12T03:14:00Z | 14 | 0 | null | [
"safetensors",
"mistral",
"Reformed",
"Christian",
"Bible",
"Theology",
"Jesus",
"Seminary",
"text-generation",
"conversational",
"base_model:sleepdeprived3/Reformed-Christian-Bible-Expert-12B",
"base_model:quantized:sleepdeprived3/Reformed-Christian-Bible-Expert-12B",
"license:apache-2.0",
"4-bit",
"exl2",
"region:us"
]
| text-generation | 2025-02-16T20:47:17Z | ---
license: apache-2.0
base_model:
- sleepdeprived3/Reformed-Christian-Bible-Expert-12B
base_model_relation: quantized
tags:
- Reformed
- Christian
- Bible
- Theology
- Jesus
- Seminary
pipeline_tag: text-generation
---
# Reformed Christian Bible Expert
A specialized language model fine-tuned for Reformed theology and biblical studies. Based on `mistralai/Mistral-Nemo-Instruct-2407` for superior theological reasoning with a **128k token context window**.
## Features
- 🕊️ Answers theological questions from a Reformed/Calvinist perspective
- ✝️ Explains biblical passages with historical-grammatical hermeneutics
- 🎓 Assists with seminary studies and sermon preparation
- 💬 Can roleplay as a pastor for counseling scenarios
- 📜 Inherits 128k context window from base model
## Usage
**Chat Template:** Mistral V3 Tekken
**Recommended Settings:**
```python
{
"temperature": 0,
"top_k": 1,
"top_p": 0,
"min_p": 0,
"repetition_penalty": 1.18
}
```
**Example Prompt:**
```
[INST] Explain the doctrine of justification by faith alone from Romans 3:28 [/INST]
```
## Quantized Formats
- **EXL2 Collection**:
[Reformed-Christian-Bible-Expert EXL2 Models](https://huggingface.co/collections/sleepdeprived3/reformed-christian-bible-expert-exl2-67ace8acd900c8cadd4c2a4e)
- **GGUF Collection**:
[Reformed-Christian-Bible-Expert GGUF Models](https://huggingface.co/collections/sleepdeprived3/reformed-christian-bible-expert-gguf-67ace8b70d16eec807037c6e)
## Training Details
- **Base Model**: `mistralai/Mistral-Nemo-Instruct-2407` (128k context)
- **Fine-Tuning**: QLoRA on curated Reformed theological texts
- **License**: Apache 2.0
## Ethical Considerations
This model is designed to:
- Affirm the authority of Scripture (2 Tim 3:16)
- Uphold the Westminster Standards
- Avoid speculative theology
*Soli Deo Gloria* |
sleepdeprived3/Reformed-Christian-Bible-Expert-12B_EXL2_3.5bpw_H8 | sleepdeprived3 | 2025-04-12T03:13:48Z | 10 | 0 | null | [
"safetensors",
"mistral",
"Reformed",
"Christian",
"Bible",
"Theology",
"Jesus",
"Seminary",
"text-generation",
"conversational",
"base_model:sleepdeprived3/Reformed-Christian-Bible-Expert-12B",
"base_model:quantized:sleepdeprived3/Reformed-Christian-Bible-Expert-12B",
"license:apache-2.0",
"exl2",
"region:us"
]
| text-generation | 2025-02-16T19:29:46Z | ---
license: apache-2.0
base_model:
- sleepdeprived3/Reformed-Christian-Bible-Expert-12B
base_model_relation: quantized
tags:
- Reformed
- Christian
- Bible
- Theology
- Jesus
- Seminary
pipeline_tag: text-generation
---
# Reformed Christian Bible Expert
A specialized language model fine-tuned for Reformed theology and biblical studies. Based on `mistralai/Mistral-Nemo-Instruct-2407` for superior theological reasoning with a **128k token context window**.
## Features
- 🕊️ Answers theological questions from a Reformed/Calvinist perspective
- ✝️ Explains biblical passages with historical-grammatical hermeneutics
- 🎓 Assists with seminary studies and sermon preparation
- 💬 Can roleplay as a pastor for counseling scenarios
- 📜 Inherits 128k context window from base model
## Usage
**Chat Template:** Mistral V3 Tekken
**Recommended Settings:**
```python
{
"temperature": 0,
"top_k": 1,
"top_p": 0,
"min_p": 0,
"repetition_penalty": 1.18
}
```
**Example Prompt:**
```
[INST] Explain the doctrine of justification by faith alone from Romans 3:28 [/INST]
```
## Quantized Formats
- **EXL2 Collection**:
[Reformed-Christian-Bible-Expert EXL2 Models](https://huggingface.co/collections/sleepdeprived3/reformed-christian-bible-expert-exl2-67ace8acd900c8cadd4c2a4e)
- **GGUF Collection**:
[Reformed-Christian-Bible-Expert GGUF Models](https://huggingface.co/collections/sleepdeprived3/reformed-christian-bible-expert-gguf-67ace8b70d16eec807037c6e)
## Training Details
- **Base Model**: `mistralai/Mistral-Nemo-Instruct-2407` (128k context)
- **Fine-Tuning**: QLoRA on curated Reformed theological texts
- **License**: Apache 2.0
## Ethical Considerations
This model is designed to:
- Affirm the authority of Scripture (2 Tim 3:16)
- Uphold the Westminster Standards
- Avoid speculative theology
*Soli Deo Gloria* |
sleepdeprived3/Reformed-Christian-Bible-Expert-12B_EXL2_3bpw_H8 | sleepdeprived3 | 2025-04-12T03:13:39Z | 15 | 0 | null | [
"safetensors",
"mistral",
"Reformed",
"Christian",
"Bible",
"Theology",
"Jesus",
"Seminary",
"text-generation",
"conversational",
"base_model:sleepdeprived3/Reformed-Christian-Bible-Expert-12B",
"base_model:quantized:sleepdeprived3/Reformed-Christian-Bible-Expert-12B",
"license:apache-2.0",
"3-bit",
"exl2",
"region:us"
]
| text-generation | 2025-02-16T19:55:09Z | ---
license: apache-2.0
base_model:
- sleepdeprived3/Reformed-Christian-Bible-Expert-12B
base_model_relation: quantized
tags:
- Reformed
- Christian
- Bible
- Theology
- Jesus
- Seminary
pipeline_tag: text-generation
---
# Reformed Christian Bible Expert
A specialized language model fine-tuned for Reformed theology and biblical studies. Based on `mistralai/Mistral-Nemo-Instruct-2407` for superior theological reasoning with a **128k token context window**.
## Features
- 🕊️ Answers theological questions from a Reformed/Calvinist perspective
- ✝️ Explains biblical passages with historical-grammatical hermeneutics
- 🎓 Assists with seminary studies and sermon preparation
- 💬 Can roleplay as a pastor for counseling scenarios
- 📜 Inherits 128k context window from base model
## Usage
**Chat Template:** Mistral V3 Tekken
**Recommended Settings:**
```python
{
"temperature": 0,
"top_k": 1,
"top_p": 0,
"min_p": 0,
"repetition_penalty": 1.18
}
```
**Example Prompt:**
```
[INST] Explain the doctrine of justification by faith alone from Romans 3:28 [/INST]
```
## Quantized Formats
- **EXL2 Collection**:
[Reformed-Christian-Bible-Expert EXL2 Models](https://huggingface.co/collections/sleepdeprived3/reformed-christian-bible-expert-exl2-67ace8acd900c8cadd4c2a4e)
- **GGUF Collection**:
[Reformed-Christian-Bible-Expert GGUF Models](https://huggingface.co/collections/sleepdeprived3/reformed-christian-bible-expert-gguf-67ace8b70d16eec807037c6e)
## Training Details
- **Base Model**: `mistralai/Mistral-Nemo-Instruct-2407` (128k context)
- **Fine-Tuning**: QLoRA on curated Reformed theological texts
- **License**: Apache 2.0
## Ethical Considerations
This model is designed to:
- Affirm the authority of Scripture (2 Tim 3:16)
- Uphold the Westminster Standards
- Avoid speculative theology
*Soli Deo Gloria* |
sleepdeprived3/Reformed-Christian-Bible-Expert-12B_EXL2_1.5bpw_H8 | sleepdeprived3 | 2025-04-12T03:13:11Z | 10 | 0 | null | [
"safetensors",
"mistral",
"Reformed",
"Christian",
"Bible",
"Theology",
"Jesus",
"Seminary",
"text-generation",
"conversational",
"base_model:sleepdeprived3/Reformed-Christian-Bible-Expert-12B",
"base_model:quantized:sleepdeprived3/Reformed-Christian-Bible-Expert-12B",
"license:apache-2.0",
"exl2",
"region:us"
]
| text-generation | 2025-02-16T18:33:28Z | ---
license: apache-2.0
base_model:
- sleepdeprived3/Reformed-Christian-Bible-Expert-12B
base_model_relation: quantized
tags:
- Reformed
- Christian
- Bible
- Theology
- Jesus
- Seminary
pipeline_tag: text-generation
---
# Reformed Christian Bible Expert
A specialized language model fine-tuned for Reformed theology and biblical studies. Based on `mistralai/Mistral-Nemo-Instruct-2407` for superior theological reasoning with a **128k token context window**.
## Features
- 🕊️ Answers theological questions from a Reformed/Calvinist perspective
- ✝️ Explains biblical passages with historical-grammatical hermeneutics
- 🎓 Assists with seminary studies and sermon preparation
- 💬 Can roleplay as a pastor for counseling scenarios
- 📜 Inherits 128k context window from base model
## Usage
**Chat Template:** Mistral V3 Tekken
**Recommended Settings:**
```python
{
"temperature": 0,
"top_k": 1,
"top_p": 0,
"min_p": 0,
"repetition_penalty": 1.18
}
```
**Example Prompt:**
```
[INST] Explain the doctrine of justification by faith alone from Romans 3:28 [/INST]
```
## Quantized Formats
- **EXL2 Collection**:
[Reformed-Christian-Bible-Expert EXL2 Models](https://huggingface.co/collections/sleepdeprived3/reformed-christian-bible-expert-exl2-67ace8acd900c8cadd4c2a4e)
- **GGUF Collection**:
[Reformed-Christian-Bible-Expert GGUF Models](https://huggingface.co/collections/sleepdeprived3/reformed-christian-bible-expert-gguf-67ace8b70d16eec807037c6e)
## Training Details
- **Base Model**: `mistralai/Mistral-Nemo-Instruct-2407` (128k context)
- **Fine-Tuning**: QLoRA on curated Reformed theological texts
- **License**: Apache 2.0
## Ethical Considerations
This model is designed to:
- Affirm the authority of Scripture (2 Tim 3:16)
- Uphold the Westminster Standards
- Avoid speculative theology
*Soli Deo Gloria* |
TOTORONG/Mistral_lora_model_250412 | TOTORONG | 2025-04-12T03:06:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral3",
"trl",
"en",
"base_model:unsloth/Mistral-Small-3.1-24B-Instruct-2503-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Mistral-Small-3.1-24B-Instruct-2503-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-12T03:06:06Z | ---
base_model: unsloth/Mistral-Small-3.1-24B-Instruct-2503-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** TOTORONG
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Mistral-Small-3.1-24B-Instruct-2503-unsloth-bnb-4bit
This mistral3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/nguyenthanhdo_-_llama32_smol_summarize_50k-gguf | RichardErkhov | 2025-04-12T03:04:46Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-12T02:28:11Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama32_smol_summarize_50k - GGUF
- Model creator: https://huggingface.co/nguyenthanhdo/
- Original model: https://huggingface.co/nguyenthanhdo/llama32_smol_summarize_50k/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama32_smol_summarize_50k.Q2_K.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_summarize_50k-gguf/blob/main/llama32_smol_summarize_50k.Q2_K.gguf) | Q2_K | 1.27GB |
| [llama32_smol_summarize_50k.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_summarize_50k-gguf/blob/main/llama32_smol_summarize_50k.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [llama32_smol_summarize_50k.IQ3_S.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_summarize_50k-gguf/blob/main/llama32_smol_summarize_50k.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [llama32_smol_summarize_50k.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_summarize_50k-gguf/blob/main/llama32_smol_summarize_50k.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [llama32_smol_summarize_50k.IQ3_M.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_summarize_50k-gguf/blob/main/llama32_smol_summarize_50k.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [llama32_smol_summarize_50k.Q3_K.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_summarize_50k-gguf/blob/main/llama32_smol_summarize_50k.Q3_K.gguf) | Q3_K | 1.57GB |
| [llama32_smol_summarize_50k.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_summarize_50k-gguf/blob/main/llama32_smol_summarize_50k.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [llama32_smol_summarize_50k.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_summarize_50k-gguf/blob/main/llama32_smol_summarize_50k.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [llama32_smol_summarize_50k.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_summarize_50k-gguf/blob/main/llama32_smol_summarize_50k.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [llama32_smol_summarize_50k.Q4_0.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_summarize_50k-gguf/blob/main/llama32_smol_summarize_50k.Q4_0.gguf) | Q4_0 | 1.79GB |
| [llama32_smol_summarize_50k.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_summarize_50k-gguf/blob/main/llama32_smol_summarize_50k.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [llama32_smol_summarize_50k.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_summarize_50k-gguf/blob/main/llama32_smol_summarize_50k.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [llama32_smol_summarize_50k.Q4_K.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_summarize_50k-gguf/blob/main/llama32_smol_summarize_50k.Q4_K.gguf) | Q4_K | 1.88GB |
| [llama32_smol_summarize_50k.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_summarize_50k-gguf/blob/main/llama32_smol_summarize_50k.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [llama32_smol_summarize_50k.Q4_1.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_summarize_50k-gguf/blob/main/llama32_smol_summarize_50k.Q4_1.gguf) | Q4_1 | 1.95GB |
| [llama32_smol_summarize_50k.Q5_0.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_summarize_50k-gguf/blob/main/llama32_smol_summarize_50k.Q5_0.gguf) | Q5_0 | 2.11GB |
| [llama32_smol_summarize_50k.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_summarize_50k-gguf/blob/main/llama32_smol_summarize_50k.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [llama32_smol_summarize_50k.Q5_K.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_summarize_50k-gguf/blob/main/llama32_smol_summarize_50k.Q5_K.gguf) | Q5_K | 2.16GB |
| [llama32_smol_summarize_50k.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_summarize_50k-gguf/blob/main/llama32_smol_summarize_50k.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [llama32_smol_summarize_50k.Q5_1.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_summarize_50k-gguf/blob/main/llama32_smol_summarize_50k.Q5_1.gguf) | Q5_1 | 2.28GB |
| [llama32_smol_summarize_50k.Q6_K.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_summarize_50k-gguf/blob/main/llama32_smol_summarize_50k.Q6_K.gguf) | Q6_K | 2.46GB |
| [llama32_smol_summarize_50k.Q8_0.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_summarize_50k-gguf/blob/main/llama32_smol_summarize_50k.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nathanialhunt2000/38e85648-9d0a-458d-8bc8-dd4bcf0a7f98 | nathanialhunt2000 | 2025-04-12T03:04:32Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:832b92f9c983099f_train_data.json",
"base_model:fxmarty/tiny-dummy-qwen2",
"base_model:adapter:fxmarty/tiny-dummy-qwen2",
"region:us"
]
| null | 2025-04-12T03:04:27Z | ---
library_name: peft
tags:
- generated_from_trainer
datasets:
- 832b92f9c983099f_train_data.json
base_model: fxmarty/tiny-dummy-qwen2
model-index:
- name: nathanialhunt2000/38e85648-9d0a-458d-8bc8-dd4bcf0a7f98
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nathanialhunt2000/38e85648-9d0a-458d-8bc8-dd4bcf0a7f98
This model was trained from scratch on the /workspace/input_data/832b92f9c983099f_train_data.json dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9130
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.15.0
- Transformers 4.50.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
Sourabhsahu22/demosourabh | Sourabhsahu22 | 2025-04-12T02:59:30Z | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
]
| null | 2025-04-12T02:59:29Z | ---
license: artistic-2.0
---
|
RichardErkhov/nguyenthanhdo_-_llama32_smol_rewrite_50k-gguf | RichardErkhov | 2025-04-12T02:57:53Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-12T02:20:48Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama32_smol_rewrite_50k - GGUF
- Model creator: https://huggingface.co/nguyenthanhdo/
- Original model: https://huggingface.co/nguyenthanhdo/llama32_smol_rewrite_50k/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama32_smol_rewrite_50k.Q2_K.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_rewrite_50k-gguf/blob/main/llama32_smol_rewrite_50k.Q2_K.gguf) | Q2_K | 1.27GB |
| [llama32_smol_rewrite_50k.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_rewrite_50k-gguf/blob/main/llama32_smol_rewrite_50k.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [llama32_smol_rewrite_50k.IQ3_S.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_rewrite_50k-gguf/blob/main/llama32_smol_rewrite_50k.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [llama32_smol_rewrite_50k.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_rewrite_50k-gguf/blob/main/llama32_smol_rewrite_50k.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [llama32_smol_rewrite_50k.IQ3_M.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_rewrite_50k-gguf/blob/main/llama32_smol_rewrite_50k.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [llama32_smol_rewrite_50k.Q3_K.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_rewrite_50k-gguf/blob/main/llama32_smol_rewrite_50k.Q3_K.gguf) | Q3_K | 1.57GB |
| [llama32_smol_rewrite_50k.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_rewrite_50k-gguf/blob/main/llama32_smol_rewrite_50k.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [llama32_smol_rewrite_50k.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_rewrite_50k-gguf/blob/main/llama32_smol_rewrite_50k.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [llama32_smol_rewrite_50k.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_rewrite_50k-gguf/blob/main/llama32_smol_rewrite_50k.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [llama32_smol_rewrite_50k.Q4_0.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_rewrite_50k-gguf/blob/main/llama32_smol_rewrite_50k.Q4_0.gguf) | Q4_0 | 1.79GB |
| [llama32_smol_rewrite_50k.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_rewrite_50k-gguf/blob/main/llama32_smol_rewrite_50k.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [llama32_smol_rewrite_50k.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_rewrite_50k-gguf/blob/main/llama32_smol_rewrite_50k.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [llama32_smol_rewrite_50k.Q4_K.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_rewrite_50k-gguf/blob/main/llama32_smol_rewrite_50k.Q4_K.gguf) | Q4_K | 1.88GB |
| [llama32_smol_rewrite_50k.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_rewrite_50k-gguf/blob/main/llama32_smol_rewrite_50k.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [llama32_smol_rewrite_50k.Q4_1.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_rewrite_50k-gguf/blob/main/llama32_smol_rewrite_50k.Q4_1.gguf) | Q4_1 | 1.95GB |
| [llama32_smol_rewrite_50k.Q5_0.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_rewrite_50k-gguf/blob/main/llama32_smol_rewrite_50k.Q5_0.gguf) | Q5_0 | 2.11GB |
| [llama32_smol_rewrite_50k.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_rewrite_50k-gguf/blob/main/llama32_smol_rewrite_50k.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [llama32_smol_rewrite_50k.Q5_K.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_rewrite_50k-gguf/blob/main/llama32_smol_rewrite_50k.Q5_K.gguf) | Q5_K | 2.16GB |
| [llama32_smol_rewrite_50k.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_rewrite_50k-gguf/blob/main/llama32_smol_rewrite_50k.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [llama32_smol_rewrite_50k.Q5_1.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_rewrite_50k-gguf/blob/main/llama32_smol_rewrite_50k.Q5_1.gguf) | Q5_1 | 2.28GB |
| [llama32_smol_rewrite_50k.Q6_K.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_rewrite_50k-gguf/blob/main/llama32_smol_rewrite_50k.Q6_K.gguf) | Q6_K | 2.46GB |
| [llama32_smol_rewrite_50k.Q8_0.gguf](https://huggingface.co/RichardErkhov/nguyenthanhdo_-_llama32_smol_rewrite_50k-gguf/blob/main/llama32_smol_rewrite_50k.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
deeponh/bengali_llama_NORMAL_distil_8b_8b_R2 | deeponh | 2025-04-12T02:57:34Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-06T12:04:37Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
oobabooga/Phi-3-mini-4k-instruct-old-exl3 | oobabooga | 2025-04-12T02:55:09Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-04-12T02:07:13Z | ---
license: apache-2.0
---
EXL3 quants of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct), revision `65be4e00a56c16d036e9cbe96b0b35f8aa0f84b0`.
This corresponds to [bartowski/Phi-3-mini-4k-instruct-old-GGUF](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-old-GGUF).
| Bits per Weight | Model Size | Link |
|-----------------|------------|------|
| 2.0 bpw | 1.18 GB | [oobabooga/Phi-3-mini-4k-instruct-old-exl3:2.0bpw](https://huggingface.co/oobabooga/Phi-3-mini-4k-instruct-old-exl3/tree/2.0bpw) |
| 2.25 bpw | 1.29 GB | [oobabooga/Phi-3-mini-4k-instruct-old-exl3:2.25bpw](https://huggingface.co/oobabooga/Phi-3-mini-4k-instruct-old-exl3/tree/2.25bpw) |
| 2.5 bpw | 1.41 GB | [oobabooga/Phi-3-mini-4k-instruct-old-exl3:2.5bpw](https://huggingface.co/oobabooga/Phi-3-mini-4k-instruct-old-exl3/tree/2.5bpw) |
| 3.0 bpw | 1.63 GB | [oobabooga/Phi-3-mini-4k-instruct-old-exl3:3.0bpw](https://huggingface.co/oobabooga/Phi-3-mini-4k-instruct-old-exl3/tree/3.0bpw) |
| 3.5 bpw | 1.86 GB | [oobabooga/Phi-3-mini-4k-instruct-old-exl3:3.5bpw](https://huggingface.co/oobabooga/Phi-3-mini-4k-instruct-old-exl3/tree/3.5bpw) |
| 4.0 bpw | 2.09 GB | [oobabooga/Phi-3-mini-4k-instruct-old-exl3:4.0bpw](https://huggingface.co/oobabooga/Phi-3-mini-4k-instruct-old-exl3/tree/4.0bpw) |
| 5.0 bpw | 2.54 GB | [oobabooga/Phi-3-mini-4k-instruct-old-exl3:5.0bpw](https://huggingface.co/oobabooga/Phi-3-mini-4k-instruct-old-exl3/tree/5.0bpw) |
| 6.0 bpw | 2.99 GB | [oobabooga/Phi-3-mini-4k-instruct-old-exl3:6.0bpw](https://huggingface.co/oobabooga/Phi-3-mini-4k-instruct-old-exl3/tree/6.0bpw) |
| 8.0 bpw | 3.90 GB | [oobabooga/Phi-3-mini-4k-instruct-old-exl3:8.0bpw](https://huggingface.co/oobabooga/Phi-3-mini-4k-instruct-old-exl3/tree/8.0bpw) | |
Pedrodom/gpt2-commentary-finetuned | Pedrodom | 2025-04-12T02:54:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-12T02:54:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
xmlans/benxi-voice | xmlans | 2025-04-12T02:54:39Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2025-04-08T16:00:57Z | ---
license: mit
---
该模型基于so-vits-svc项目训练,其目的是为了还原已故歌手本兮的声音,以便用她的音色演唱许多流行歌曲。
希望本兮拥有一个数字生命,她的歌声能够永远陪伴着大家 |
keita-jun-bowchans/Bowchan-1-open | keita-jun-bowchans | 2025-04-12T02:50:47Z | 0 | 0 | null | [
"safetensors",
"gpt-scratch",
"text-generation",
"arxiv:1910.09700",
"region:us"
]
| text-generation | 2025-04-11T09:16:07Z | ---
pipeline_tag: text-generation
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DevQuasar/Zyphra.ZR1-1.5B-GGUF | DevQuasar | 2025-04-12T02:49:09Z | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:Zyphra/ZR1-1.5B",
"base_model:quantized:Zyphra/ZR1-1.5B",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2025-04-12T02:02:52Z | ---
base_model:
- Zyphra/ZR1-1.5B
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [Zyphra/ZR1-1.5B](https://huggingface.co/Zyphra/ZR1-1.5B)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
hungphongtrn/qwen2-5-3B-semantic-reward-grpo | hungphongtrn | 2025-04-12T02:48:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
]
| null | 2025-04-11T08:14:37Z | ---
base_model: unsloth/qwen2.5-3b-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
Dc-4nderson/EverFlora-Llama-3.2-1B-Finetuned2 | Dc-4nderson | 2025-04-12T02:46:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-12T02:42:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/R1-Onevision-7B-RL-GGUF | mradermacher | 2025-04-12T02:45:58Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Fancy-MLLM/R1-Onevision-7B-RL",
"base_model:quantized:Fancy-MLLM/R1-Onevision-7B-RL",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-11T21:35:53Z | ---
base_model: Fancy-MLLM/R1-Onevision-7B-RL
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Fancy-MLLM/R1-Onevision-7B-RL
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/R1-Onevision-7B-RL-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/R1-Onevision-7B-RL-GGUF/resolve/main/R1-Onevision-7B-RL.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/R1-Onevision-7B-RL-GGUF/resolve/main/R1-Onevision-7B-RL.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/R1-Onevision-7B-RL-GGUF/resolve/main/R1-Onevision-7B-RL.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/R1-Onevision-7B-RL-GGUF/resolve/main/R1-Onevision-7B-RL.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/R1-Onevision-7B-RL-GGUF/resolve/main/R1-Onevision-7B-RL.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/R1-Onevision-7B-RL-GGUF/resolve/main/R1-Onevision-7B-RL.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/R1-Onevision-7B-RL-GGUF/resolve/main/R1-Onevision-7B-RL.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/R1-Onevision-7B-RL-GGUF/resolve/main/R1-Onevision-7B-RL.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/R1-Onevision-7B-RL-GGUF/resolve/main/R1-Onevision-7B-RL.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/R1-Onevision-7B-RL-GGUF/resolve/main/R1-Onevision-7B-RL.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/R1-Onevision-7B-RL-GGUF/resolve/main/R1-Onevision-7B-RL.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/R1-Onevision-7B-RL-GGUF/resolve/main/R1-Onevision-7B-RL.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Kotajiro/anzu_flux | Kotajiro | 2025-04-12T02:45:13Z | 0 | 16 | diffusers | [
"diffusers",
"realistic",
"flux.1",
"lora",
"diffusion",
"ja",
"en",
"license:apache-2.0",
"region:us"
]
| null | 2024-08-26T23:20:31Z | ---
license: apache-2.0
language:
- ja
- en
tags:
- realistic
- flux.1
- lora
- diffusion
library_name: diffusers
datasets: []
---
# anzu_flux モデルシリーズ(flux.1由来)
本モデルおよび「flux.1」由来の全てのモデル(名前に `flux` を含むもの)は、**flux1.[dev]** をベースとし、**Black Forest Labs の利用規約**に準拠します。ご利用の際は、flux1-dev に定められた制限事項を遵守してください。
## 🔸 モデルの構成
| モデル名 | 内容・特徴 |
|----------|------------|
| `anzu_flax_lora_v10` | v1.0 正式リリース版 |
| `anzu_flux_Mix_beta01_.safetensors` | flux.1-dev fp8 に `anzu_flux_lora`(開発版)を 0.6 の重みでマージ |
| `anzu_flax_lora_e010` | Epoch10 (Step500)、v1.0 よりマイルドな顔立ち傾向 |
| `anzu_flux_lora_v20d4` | v2.0 軽量化バージョン |
| `anzu-flux-lora_v20` | v2.0 正式リリース版 |
## 🔸 使用に関する注意事項
- 本モデルの**商用利用は禁止**です。
- 本モデルで生成された画像を**他の学習用データとして再利用することはできません**(ライセンス上、禁止されています)。
- 生成画像を公開する際は、**モデル名を明記**してください。
- SNS等での公開時には、**ハッシュタグ `#anzu_flux` の付与を推奨**します。
## 🚫 使用禁止のケース(重要)
以下のような用途への使用は**固く禁止**します:
- 暴力的な表現
- 児童ポルノ
- 未成年者の画像および性的な表現、またはそれに準ずる容姿(例:水着・下着 等)
- 過度に性的な表現
- 実在の人物に類似した画像を本人の許可なく公開する行為
※ただし、**本人公認のLoRAを使用する場合は除きます。**
## ⚠️ 免責事項
- 本モデルの使用により発生したいかなる問題(生成画像に関する苦情・法的トラブル等)についても、**開発者は一切の責任を負いません**。
- 利用者自身の責任においてご使用ください。
---
## 🔧 ベースモデル情報
- ベースモデル:`flux1-dev`
- 出力形式:LoRA、safetensors(fp8ベース)
|
matrixportal/Profesor-Dare_Ties | matrixportal | 2025-04-12T02:44:41Z | 0 | 0 | null | [
"safetensors",
"llama",
"merge",
"mergekit",
"lazymergekit",
"ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1",
"meta-llama/Llama-3.1-8B",
"tr",
"en",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:merge:meta-llama/Llama-3.1-8B",
"base_model:ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1",
"base_model:merge:ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1",
"region:us"
]
| null | 2025-04-12T01:13:11Z | ---
base_model:
- ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1
- meta-llama/Llama-3.1-8B
tags:
- merge
- mergekit
- lazymergekit
- ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1
- meta-llama/Llama-3.1-8B
language:
- tr
- en
---
# Profesor-Dare_Ties
Profesor-Dare_Ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1](https://huggingface.co/ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1)
* [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B)
## 🧩 Configuration
```yaml
merge_method: dare_ties
base_model: meta-llama/Llama-3.1-8B
dtype: bfloat16
models:
- model: ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1
parameters: {weight: 0.75, density: 0.7}
- model: meta-llama/Llama-3.1-8B
parameters: {weight: 0.25, density: 0.3}
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "matrixportal/Profesor-Dare_Ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
kushalballari/distilhubert-tone-classification | kushalballari | 2025-04-12T02:43:50Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:Emo-Codec/CREMA-D_synth",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2025-04-12T01:31:37Z | ---
library_name: transformers
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- Emo-Codec/CREMA-D_synth
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: distilhubert-tone-classification
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: CREMA-D
type: Emo-Codec/CREMA-D_synth
metrics:
- name: Accuracy
type: accuracy
value: 0.7024128686327078
- name: Precision
type: precision
value: 0.7036509389001218
- name: Recall
type: recall
value: 0.7024128686327078
- name: F1
type: f1
value: 0.6970142752522046
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-tone-classification
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the CREMA-D dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1479
- Accuracy: 0.7024
- Precision: 0.7037
- Recall: 0.7024
- F1: 0.6970
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.339 | 1.0 | 442 | 1.3491 | 0.4987 | 0.5533 | 0.4987 | 0.4664 |
| 1.0008 | 2.0 | 884 | 1.0219 | 0.6408 | 0.6668 | 0.6408 | 0.6373 |
| 0.7673 | 3.0 | 1326 | 0.9572 | 0.6676 | 0.6870 | 0.6676 | 0.6557 |
| 0.5888 | 4.0 | 1768 | 0.8830 | 0.6890 | 0.6930 | 0.6890 | 0.6889 |
| 0.4396 | 5.0 | 2210 | 1.0893 | 0.6810 | 0.7064 | 0.6810 | 0.6738 |
| 0.2987 | 6.0 | 2652 | 1.0561 | 0.6810 | 0.6892 | 0.6810 | 0.6738 |
| 0.2009 | 7.0 | 3094 | 1.1421 | 0.6836 | 0.6944 | 0.6836 | 0.6769 |
| 0.1345 | 8.0 | 3536 | 1.1479 | 0.7024 | 0.7037 | 0.7024 | 0.6970 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
matrixportal/TurkLlama-Mix-Dare_Ties-GGUF | matrixportal | 2025-04-12T02:43:08Z | 0 | 1 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1",
"meta-llama/Llama-3.1-8B",
"tr",
"en",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:merge:meta-llama/Llama-3.1-8B",
"base_model:ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1",
"base_model:merge:ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-11T13:01:35Z | ---
base_model:
- ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1
- meta-llama/Llama-3.1-8B
tags:
- merge
- mergekit
- lazymergekit
- ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1
- meta-llama/Llama-3.1-8B
language:
- tr
- en
---
# TurkLlama-Mix-Dare_Ties GGUF Quantized Models
## Technical Details
- **Quantization Tool:** llama.cpp
- **Version:** version: 5116 (b2034c2b)
## Model Information
- **Base Model:** [matrixportal/TurkLlama-Mix-Dare_Ties](https://huggingface.co/matrixportal/TurkLlama-Mix-Dare_Ties)
- **Quantized by:** [matrixportal](https://huggingface.co/matrixportal)
## Available Files
| 🚀 Download | 🔢 Type | 📝 Description |
|------------|---------|---------------|
| [Download](https://huggingface.co/matrixportal/TurkLlama-Mix-Dare_Ties-GGUF/resolve/main/turkllama-mix-dare-ties.q2_k.gguf) | Q2 K | Tiny size, lowest quality (emergency use only) |
| [Download](https://huggingface.co/matrixportal/TurkLlama-Mix-Dare_Ties-GGUF/resolve/main/turkllama-mix-dare-ties.q3_k_m.gguf) | Q3 K M | Small, acceptable quality |
| [Download](https://huggingface.co/matrixportal/TurkLlama-Mix-Dare_Ties-GGUF/resolve/main/turkllama-mix-dare-ties.q4_0.gguf) | Q4 0 | Standard 4-bit (fast on ARM) |
| [Download](https://huggingface.co/matrixportal/TurkLlama-Mix-Dare_Ties-GGUF/resolve/main/turkllama-mix-dare-ties.q4_k_m.gguf) | Q4 K M | 4-bit balanced (recommended default) |
| [Download](https://huggingface.co/matrixportal/TurkLlama-Mix-Dare_Ties-GGUF/resolve/main/turkllama-mix-dare-ties.q5_k_m.gguf) | Q5 K M | 5-bit best (recommended HQ option) |
| [Download](https://huggingface.co/matrixportal/TurkLlama-Mix-Dare_Ties-GGUF/resolve/main/turkllama-mix-dare-ties.q6_k.gguf) | Q6 K | 6-bit near-perfect (premium quality) |
| [Download](https://huggingface.co/matrixportal/TurkLlama-Mix-Dare_Ties-GGUF/resolve/main/turkllama-mix-dare-ties.q8_0.gguf) | Q8 0 | 8-bit maximum (overkill for most) |
💡 **Q4 K M** provides the best balance for most use cases |
RichardErkhov/aidadev48_-_aidav7-gguf | RichardErkhov | 2025-04-12T02:42:40Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-12T02:05:04Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
aidav7 - GGUF
- Model creator: https://huggingface.co/aidadev48/
- Original model: https://huggingface.co/aidadev48/aidav7/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [aidav7.Q2_K.gguf](https://huggingface.co/RichardErkhov/aidadev48_-_aidav7-gguf/blob/main/aidav7.Q2_K.gguf) | Q2_K | 1.27GB |
| [aidav7.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/aidadev48_-_aidav7-gguf/blob/main/aidav7.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [aidav7.IQ3_S.gguf](https://huggingface.co/RichardErkhov/aidadev48_-_aidav7-gguf/blob/main/aidav7.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [aidav7.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/aidadev48_-_aidav7-gguf/blob/main/aidav7.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [aidav7.IQ3_M.gguf](https://huggingface.co/RichardErkhov/aidadev48_-_aidav7-gguf/blob/main/aidav7.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [aidav7.Q3_K.gguf](https://huggingface.co/RichardErkhov/aidadev48_-_aidav7-gguf/blob/main/aidav7.Q3_K.gguf) | Q3_K | 1.57GB |
| [aidav7.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/aidadev48_-_aidav7-gguf/blob/main/aidav7.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [aidav7.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/aidadev48_-_aidav7-gguf/blob/main/aidav7.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [aidav7.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/aidadev48_-_aidav7-gguf/blob/main/aidav7.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [aidav7.Q4_0.gguf](https://huggingface.co/RichardErkhov/aidadev48_-_aidav7-gguf/blob/main/aidav7.Q4_0.gguf) | Q4_0 | 1.79GB |
| [aidav7.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/aidadev48_-_aidav7-gguf/blob/main/aidav7.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [aidav7.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/aidadev48_-_aidav7-gguf/blob/main/aidav7.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [aidav7.Q4_K.gguf](https://huggingface.co/RichardErkhov/aidadev48_-_aidav7-gguf/blob/main/aidav7.Q4_K.gguf) | Q4_K | 1.88GB |
| [aidav7.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/aidadev48_-_aidav7-gguf/blob/main/aidav7.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [aidav7.Q4_1.gguf](https://huggingface.co/RichardErkhov/aidadev48_-_aidav7-gguf/blob/main/aidav7.Q4_1.gguf) | Q4_1 | 1.95GB |
| [aidav7.Q5_0.gguf](https://huggingface.co/RichardErkhov/aidadev48_-_aidav7-gguf/blob/main/aidav7.Q5_0.gguf) | Q5_0 | 2.11GB |
| [aidav7.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/aidadev48_-_aidav7-gguf/blob/main/aidav7.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [aidav7.Q5_K.gguf](https://huggingface.co/RichardErkhov/aidadev48_-_aidav7-gguf/blob/main/aidav7.Q5_K.gguf) | Q5_K | 2.16GB |
| [aidav7.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/aidadev48_-_aidav7-gguf/blob/main/aidav7.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [aidav7.Q5_1.gguf](https://huggingface.co/RichardErkhov/aidadev48_-_aidav7-gguf/blob/main/aidav7.Q5_1.gguf) | Q5_1 | 2.28GB |
| [aidav7.Q6_K.gguf](https://huggingface.co/RichardErkhov/aidadev48_-_aidav7-gguf/blob/main/aidav7.Q6_K.gguf) | Q6_K | 2.46GB |
| [aidav7.Q8_0.gguf](https://huggingface.co/RichardErkhov/aidadev48_-_aidav7-gguf/blob/main/aidav7.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** aidadev48
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mikeogezi/data_3b_lora_64_sample_100 | mikeogezi | 2025-04-12T02:42:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-12T02:41:54Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
clepelaars/task-7-Qwen-Qwen1.5-14B-Chat-2 | clepelaars | 2025-04-12T02:41:27Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"qwen2",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-14B-Chat",
"base_model:adapter:Qwen/Qwen1.5-14B-Chat",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-04-12T02:38:44Z | ---
base_model: Qwen/Qwen1.5-14B-Chat
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
matrixportal/TurkLlama-Mix-v2-GGUF | matrixportal | 2025-04-12T02:41:18Z | 6 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"meta-llama/Llama-3.1-8B",
"ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1",
"tr",
"en",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:merge:meta-llama/Llama-3.1-8B",
"base_model:ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1",
"base_model:merge:ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-11T01:32:37Z | ---
base_model:
- meta-llama/Llama-3.1-8B
- ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1
tags:
- merge
- mergekit
- lazymergekit
- meta-llama/Llama-3.1-8B
- ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1
language:
- tr
- en
---
# TurkLlama-Mix-v2 GGUF Quantized Models
## Technical Details
- **Quantization Tool:** llama.cpp
- **Version:** version: 5106 (47ba87d0)
## Model Information
- **Base Model:** [matrixportal/TurkLlama-Mix-v2](https://huggingface.co/matrixportal/TurkLlama-Mix-v2)
- **Quantized by:** [matrixportal](https://huggingface.co/matrixportal)
## Available Files
| 🚀 Download | 🔢 Type | 📝 Description |
|------------|---------|---------------|
| [Download](https://huggingface.co/matrixportal/TurkLlama-Mix-v2-GGUF/resolve/main/turkllama-mix-v2.q3_k_m.gguf) | Q3 K M | Small, acceptable quality |
| [Download](https://huggingface.co/matrixportal/TurkLlama-Mix-v2-GGUF/resolve/main/turkllama-mix-v2.q4_0.gguf) | Q4 0 | Standard 4-bit (fast on ARM) |
| [Download](https://huggingface.co/matrixportal/TurkLlama-Mix-v2-GGUF/resolve/main/turkllama-mix-v2.q4_k_m.gguf) | Q4 K M | 4-bit balanced (recommended default) |
| [Download](https://huggingface.co/matrixportal/TurkLlama-Mix-v2-GGUF/resolve/main/turkllama-mix-v2.q5_k_m.gguf) | Q5 K M | 5-bit best (recommended HQ option) |
💡 **Q4 K M** provides the best balance for most use cases |
matrixportal/TurkLlama-Mix-GGUF | matrixportal | 2025-04-12T02:40:11Z | 10 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"meta-llama/Meta-Llama-3-8B",
"ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1",
"tr",
"en",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:merge:meta-llama/Meta-Llama-3-8B",
"base_model:ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1",
"base_model:merge:ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1",
"license:llama3",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-11T00:30:40Z | ---
base_model:
- meta-llama/Meta-Llama-3-8B
- ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1
tags:
- merge
- mergekit
- lazymergekit
- meta-llama/Meta-Llama-3-8B
- ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1
license: llama3
language:
- tr
- en
---
# TurkLlama-Mix GGUF Quantized Models
## Technical Details
- **Quantization Tool:** llama.cpp
- **Version:** version: 5106 (47ba87d0)
## Model Information
- **Base Model:** [matrixportal/TurkLlama-Mix](https://huggingface.co/matrixportal/TurkLlama-Mix)
- **Quantized by:** [matrixportal](https://huggingface.co/matrixportal)
## Available Files
| 🚀 Download | 🔢 Type | 📝 Description |
|------------|---------|---------------|
| [Download](https://huggingface.co/matrixportal/TurkLlama-Mix-GGUF/resolve/main/turkllama-mix.q3_k_m.gguf) | Q3 K M | Small, acceptable quality |
| [Download](https://huggingface.co/matrixportal/TurkLlama-Mix-GGUF/resolve/main/turkllama-mix.q4_0.gguf) | Q4 0 | Standard 4-bit (fast on ARM) |
| [Download](https://huggingface.co/matrixportal/TurkLlama-Mix-GGUF/resolve/main/turkllama-mix.q4_k_m.gguf) | Q4 K M | 4-bit balanced (recommended default) |
| [Download](https://huggingface.co/matrixportal/TurkLlama-Mix-GGUF/resolve/main/turkllama-mix.q5_k_m.gguf) | Q5 K M | 5-bit best (recommended HQ option) |
💡 **Q4 K M** provides the best balance for most use cases |
FACADEEEE/medalu_16bit_v2 | FACADEEEE | 2025-04-12T02:40:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-12T02:37:18Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** FACADEEEE
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
FACADEEEE/adapter_medalu_quechua_v2 | FACADEEEE | 2025-04-12T02:36:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-12T02:36:40Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** FACADEEEE
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sleepdeprived3/Baptist-Christian-Bible-Expert-v2.0-12B-LoRA | sleepdeprived3 | 2025-04-12T02:34:51Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"generated_from_trainer",
"dataset:data/data.jsonl",
"base_model:mistralai/Mistral-Nemo-Instruct-2407",
"base_model:adapter:mistralai/Mistral-Nemo-Instruct-2407",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-04-12T02:32:43Z | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-Nemo-Instruct-2407
tags:
- generated_from_trainer
datasets:
- data/data.jsonl
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.8.1`
```yaml
base_model: mistralai/Mistral-Nemo-Instruct-2407
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
model_config:
trust_remote_code: true
tokenizer:
pad_token: "</s>"
padding_side: "right"
add_bos_token: true
add_eos_token: false
datasets:
- path: data/data.jsonl
type: chat_template
chat_template_strategy: tokenizer
field_messages: conversations
message_property_mappings:
role: role
content: content
roles:
user: ["user"]
assistant: ["assistant"]
system: ["system"]
load_in_4bit: true
adapter: qlora
lora_r: 64
lora_alpha: 32
lora_dropout: 0.1
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- up_proj
- down_proj
bf16: true
flash_attention: true
gradient_checkpointing: true
deepspeed: deepspeed_configs/zero2.json
gradient_accumulation_steps: 4
micro_batch_size: 8
num_epochs: 2
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 3e-6
warmup_ratio: 0.02
max_seq_length: 8192
pad_to_sequence_len: true
sample_packing: true
max_grad_norm: 1.0
output_dir: ./output
save_steps: 100
logging_steps: 10
save_safetensors: true
special_tokens:
pad_token: "</s>"
```
</details><br>
# output
This model is a fine-tuned version of [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) on the data/data.jsonl dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 9
- num_epochs: 2.0
### Training results
### Framework versions
- PEFT 0.15.1
- Transformers 4.51.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
alpcaferoglu/Qwen2.5-Coder-3B-Instruct-bnb-4bit__bird__dev__california_schools__t2s-sl-dc__r16_a32_e2_bs2 | alpcaferoglu | 2025-04-12T02:31:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-11T20:24:34Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/taicheng_-_zephyr-7b-align-scan-0.0-0.3-polynomial-3-gguf | RichardErkhov | 2025-04-12T02:30:14Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-11T22:50:54Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
zephyr-7b-align-scan-0.0-0.3-polynomial-3 - GGUF
- Model creator: https://huggingface.co/taicheng/
- Original model: https://huggingface.co/taicheng/zephyr-7b-align-scan-0.0-0.3-polynomial-3/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q2_K.gguf](https://huggingface.co/RichardErkhov/taicheng_-_zephyr-7b-align-scan-0.0-0.3-polynomial-3-gguf/blob/main/zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q2_K.gguf) | Q2_K | 2.53GB |
| [zephyr-7b-align-scan-0.0-0.3-polynomial-3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/taicheng_-_zephyr-7b-align-scan-0.0-0.3-polynomial-3-gguf/blob/main/zephyr-7b-align-scan-0.0-0.3-polynomial-3.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [zephyr-7b-align-scan-0.0-0.3-polynomial-3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/taicheng_-_zephyr-7b-align-scan-0.0-0.3-polynomial-3-gguf/blob/main/zephyr-7b-align-scan-0.0-0.3-polynomial-3.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/taicheng_-_zephyr-7b-align-scan-0.0-0.3-polynomial-3-gguf/blob/main/zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [zephyr-7b-align-scan-0.0-0.3-polynomial-3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/taicheng_-_zephyr-7b-align-scan-0.0-0.3-polynomial-3-gguf/blob/main/zephyr-7b-align-scan-0.0-0.3-polynomial-3.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q3_K.gguf](https://huggingface.co/RichardErkhov/taicheng_-_zephyr-7b-align-scan-0.0-0.3-polynomial-3-gguf/blob/main/zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q3_K.gguf) | Q3_K | 3.28GB |
| [zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/taicheng_-_zephyr-7b-align-scan-0.0-0.3-polynomial-3-gguf/blob/main/zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/taicheng_-_zephyr-7b-align-scan-0.0-0.3-polynomial-3-gguf/blob/main/zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [zephyr-7b-align-scan-0.0-0.3-polynomial-3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/taicheng_-_zephyr-7b-align-scan-0.0-0.3-polynomial-3-gguf/blob/main/zephyr-7b-align-scan-0.0-0.3-polynomial-3.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q4_0.gguf](https://huggingface.co/RichardErkhov/taicheng_-_zephyr-7b-align-scan-0.0-0.3-polynomial-3-gguf/blob/main/zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q4_0.gguf) | Q4_0 | 3.83GB |
| [zephyr-7b-align-scan-0.0-0.3-polynomial-3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/taicheng_-_zephyr-7b-align-scan-0.0-0.3-polynomial-3-gguf/blob/main/zephyr-7b-align-scan-0.0-0.3-polynomial-3.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/taicheng_-_zephyr-7b-align-scan-0.0-0.3-polynomial-3-gguf/blob/main/zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q4_K.gguf](https://huggingface.co/RichardErkhov/taicheng_-_zephyr-7b-align-scan-0.0-0.3-polynomial-3-gguf/blob/main/zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q4_K.gguf) | Q4_K | 4.07GB |
| [zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/taicheng_-_zephyr-7b-align-scan-0.0-0.3-polynomial-3-gguf/blob/main/zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q4_1.gguf](https://huggingface.co/RichardErkhov/taicheng_-_zephyr-7b-align-scan-0.0-0.3-polynomial-3-gguf/blob/main/zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q4_1.gguf) | Q4_1 | 4.24GB |
| [zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q5_0.gguf](https://huggingface.co/RichardErkhov/taicheng_-_zephyr-7b-align-scan-0.0-0.3-polynomial-3-gguf/blob/main/zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q5_0.gguf) | Q5_0 | 4.65GB |
| [zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/taicheng_-_zephyr-7b-align-scan-0.0-0.3-polynomial-3-gguf/blob/main/zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q5_K.gguf](https://huggingface.co/RichardErkhov/taicheng_-_zephyr-7b-align-scan-0.0-0.3-polynomial-3-gguf/blob/main/zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q5_K.gguf) | Q5_K | 4.78GB |
| [zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/taicheng_-_zephyr-7b-align-scan-0.0-0.3-polynomial-3-gguf/blob/main/zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q5_1.gguf](https://huggingface.co/RichardErkhov/taicheng_-_zephyr-7b-align-scan-0.0-0.3-polynomial-3-gguf/blob/main/zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q5_1.gguf) | Q5_1 | 5.07GB |
| [zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q6_K.gguf](https://huggingface.co/RichardErkhov/taicheng_-_zephyr-7b-align-scan-0.0-0.3-polynomial-3-gguf/blob/main/zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q6_K.gguf) | Q6_K | 5.53GB |
| [zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q8_0.gguf](https://huggingface.co/RichardErkhov/taicheng_-_zephyr-7b-align-scan-0.0-0.3-polynomial-3-gguf/blob/main/zephyr-7b-align-scan-0.0-0.3-polynomial-3.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
library_name: transformers
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: zephyr-7b-align-scan-0.0-0.3-polynomial-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-align-scan-0.0-0.3-polynomial-3
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7657
- Rewards/chosen: -1.7538
- Rewards/rejected: -2.6151
- Rewards/accuracies: 0.3194
- Rewards/margins: 0.8612
- Logps/rejected: -90.8353
- Logps/chosen: -81.0013
- Logits/rejected: -2.4771
- Logits/chosen: -2.4954
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.9336856595926955e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.529 | 1.0417 | 100 | 0.6391 | 0.3319 | 0.0135 | 0.3353 | 0.3184 | -81.0782 | -73.2591 | -2.5045 | -2.5211 |
| 0.3646 | 2.0833 | 200 | 0.6854 | -0.2626 | -0.8523 | 0.3254 | 0.5897 | -84.2922 | -75.4660 | -2.5243 | -2.5414 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
|
yannn001/eedo-7B-v1.0 | yannn001 | 2025-04-12T02:23:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Text",
"LLMA",
"conversational",
"en",
"license:gpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
]
| text-generation | 2025-04-11T10:24:46Z | ---
tags:
- Text
- LLMA
license: gpl-3.0
language:
- en
library_name: transformers
--- |
mikeogezi/data_1b_lora_128_sample_950 | mikeogezi | 2025-04-12T02:23:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-12T02:22:50Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
liuwenhan/RankMistral100 | liuwenhan | 2025-04-12T02:22:02Z | 57 | 2 | null | [
"safetensors",
"mistral",
"text-ranking",
"en",
"arxiv:2412.14574",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.3",
"license:mit",
"region:us"
]
| text-ranking | 2024-12-18T12:36:11Z | ---
license: mit
language:
- en
base_model:
- mistralai/Mistral-7B-Instruct-v0.3
pipeline_tag: text-ranking
---
## Model Information
We release the full ranking model RankMistral100 distilled from GPT-4o-2024-08-06 used in **Sliding Windows Are Not the End: Exploring Full Ranking with Long-Context Large Language Models**.
<p align="left">
Useful links: 📝 <a href="https://arxiv.org/abs/2412.14574" target="_blank">Paper</a> • 🤗 <a href="https://huggingface.co/datasets/liuwenhan/msmarco_full_ranking_list" target="_blank">Dataset</a> • </a> 🧩 <a href="https://github.com/8421BCD/fullrank" target="_blank">Github</a>
</p>
## Training framework
Our full ranking model aims to directly rerank 100 passages at a time, abandoning the sliding window strategy. We propose a multi-pass sliding window approach for generating the full ranking list as label and design a importance-aware training loss for optimization.
<img src="https://8421bcd.oss-cn-beijing.aliyuncs.com/img/image-20241218200920116.png" alt="image-20241218200920116" style="zoom: 45%;" />
## Backbone Model
RankMistral100 is finetuned from https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3.
## Performance
We surpuss the strong baseline RankZephyr with 1.2 points on BEIR Avg.
| Models | Covid | DBPedia | SciFact | NFCorpus | Signal | Robust04 | Touche | News | Avg. |
| ------------------------- | ----- | ------- | ------- | -------- | ------ | -------- | ------ | ----- | --------- |
| BM25 | 59.47 | 31.80 | 67.89 | 33.75 | 33.04 | 40.70 | 44.22 | 39.52 | 43.80 |
| monoBERT (340M) | 73.45 | 41.69 | 62.22 | 34.92 | 30.63 | 44.21 | 30.26 | 47.03 | 45.55 |
| monoT5 (220M) | 75.94 | 42.43 | 65.07 | 35.42 | 31.20 | 44.15 | 30.35 | 46.98 | 46.44 |
| RankVicuna (7B) | 79.19 | 44.51 | 70.67 | 34.51 | 34.24 | 48.33 | 33.00 | 47.15 | 48.95 |
| RankZepeyer (7B) | 82.92 | 44.42 | 75.42 | 38.26 | 31.41 | 53.73 | 30.22 | 52.80 | 51.15 |
| RankMistral<sub>100</sub> (7B) | 82.24 | 43.54 | 77.04 | 39.14 | 33.99 | 57.91 | 34.63 | 50.59 | **52.40** |
🌹 If you use this model, please ✨star our <a href="https://github.com/8421BCD/fullrank" target="_blank">GitHub repository</a> to support us. Your star means a lot! |
deeponh/hindi_llama_NORMAL_distil_8b_8b | deeponh | 2025-04-12T02:14:20Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T14:33:46Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mikeogezi/data_1b_lora_128_sample_500 | mikeogezi | 2025-04-12T02:11:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-12T02:10:39Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
qLhwaa/newzev_flux | qLhwaa | 2025-04-12T02:09:56Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-04-12T01:39:55Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: zebjdia23
---
# Newzev_Flux
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `zebjdia23` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "zebjdia23",
"lora_weights": "https://huggingface.co/qLhwaa/newzev_flux/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('qLhwaa/newzev_flux', weight_name='lora.safetensors')
image = pipeline('zebjdia23').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/qLhwaa/newzev_flux/discussions) to add images that show off what you’ve made with this LoRA.
|
thanghf/math_model_v2 | thanghf | 2025-04-12T02:09:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-12T02:04:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
deeponh/bengali_llama_NORMAL_distil_8b_3b | deeponh | 2025-04-12T01:57:22Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T14:06:07Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
komado/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-patterned_dextrous_okapi | komado | 2025-04-12T01:55:57Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am patterned dextrous okapi",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-03T02:16:23Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-patterned_dextrous_okapi
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am patterned dextrous okapi
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-patterned_dextrous_okapi
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="komado/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-patterned_dextrous_okapi", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
bowilleatyou/a3b6985b-1912-4b2b-a160-360cdf88a45d | bowilleatyou | 2025-04-12T01:52:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-11T16:46:23Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
btbtyler09/cogito-v1-preview-qwen-32B-gptq-8bit | btbtyler09 | 2025-04-12T01:52:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"8bit",
"gptq",
"conversational",
"base_model:deepcogito/cogito-v1-preview-qwen-32B",
"base_model:quantized:deepcogito/cogito-v1-preview-qwen-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
]
| text-generation | 2025-04-12T01:45:56Z | ---
license: apache-2.0
library_name: transformers
base_model: deepcogito/cogito-v1-preview-qwen-32B
pipeline_tag: text-generation
tags:
- 8bit
- gptq
---
# Cogito v1 preview - 32B
GPTQ quantization of Cogito v1 preview - 32B based on Qwen QwQ. This is an 8bit gptq quantization.
[Blog Post](https://www.deepcogito.com/research/cogito-v1-preview) |
zera09/Llamacontext | zera09 | 2025-04-12T01:52:10Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.2-11B-Vision-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-11B-Vision-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-12T01:52:01Z | ---
base_model: meta-llama/Llama-3.2-11B-Vision-Instruct
library_name: transformers
model_name: Llamacontext
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Llamacontext
This model is a fine-tuned version of [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="zera09/Llamacontext", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zeramarveenlyngkhoi/huggingface/runs/r1jomldr)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mikeogezi/data_1b_lora_64_sample_500 | mikeogezi | 2025-04-12T01:47:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-12T01:47:03Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bowilleatyou/7aec2619-7086-4f8d-968d-cf60138a92bf | bowilleatyou | 2025-04-12T01:46:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-12T01:45:21Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mikeogezi/data_3b_lora_32_sample_100 | mikeogezi | 2025-04-12T01:46:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-12T01:46:36Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lucas1026/aslora_adamwr_alttrue_lr0.8_a8_r8_s128_seed31 | lucas1026 | 2025-04-12T01:45:32Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"region:us"
]
| null | 2025-04-12T01:45:17Z | ---
base_model: meta-llama/Llama-3.1-8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.2.dev0 |
thomas-erhart/3B_duration_weighted_oversampled_masked_test | thomas-erhart | 2025-04-12T01:43:25Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-3B",
"base_model:adapter:unsloth/Qwen2.5-3B",
"license:other",
"region:us"
]
| null | 2025-04-11T18:21:23Z | ---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B
tags:
- llama-factory
- lora
- unsloth
- generated_from_trainer
model-index:
- name: 3B_duration_weighted_oversampled_masked_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 3B_duration_weighted_oversampled_masked_test
This model is a fine-tuned version of [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B) on the my_train_dataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.15.0
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0 |
celax/bert-finetuned-rte | celax | 2025-04-12T01:42:28Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-04-11T13:50:47Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-finetuned-rte
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-rte
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6649
- Accuracy: 0.6606
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.827226177606625e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7014 | 1.0 | 39 | 0.6858 | 0.5307 |
| 0.6761 | 2.0 | 78 | 0.6603 | 0.6209 |
| 0.6071 | 3.0 | 117 | 0.6996 | 0.5848 |
| 0.5364 | 4.0 | 156 | 0.6649 | 0.6606 |
| 0.4927 | 5.0 | 195 | 0.6900 | 0.6462 |
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
logexp1/klue-roberta-small-cross-encoder | logexp1 | 2025-04-12T01:42:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"cross-encoder",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-04-12T01:41:45Z | ---
library_name: transformers
tags:
- cross-encoder
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mikeogezi/data_3b_lora_16_sample_950 | mikeogezi | 2025-04-12T01:41:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-12T01:41:24Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mikeogezi/data_1b_lora_64_sample_100 | mikeogezi | 2025-04-12T01:39:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-12T01:38:39Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
onkolahmet/Qwen2-0.5B-Instruct-SFT | onkolahmet | 2025-04-12T01:36:33Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-12T01:35:01Z | ---
base_model: Qwen/Qwen2-0.5B-Instruct
library_name: transformers
model_name: Qwen2-0.5B-Instruct-SFT
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2-0.5B-Instruct-SFT
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="onkolahmet/Qwen2-0.5B-Instruct-SFT", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/onkoll-ahmet-technical-university-of-munich/huggingface/runs/ciki7bin)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
onkolahmet/tmp_trainer | onkolahmet | 2025-04-12T01:35:26Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-12T00:15:08Z | ---
base_model: Qwen/Qwen2-0.5B-Instruct
library_name: transformers
model_name: tmp_trainer
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for tmp_trainer
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="onkolahmet/tmp_trainer", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mikeogezi/data_1b_lora_32_sample_950 | mikeogezi | 2025-04-12T01:35:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-12T01:34:56Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yeok/RS-Faithful | yeok | 2025-04-12T01:30:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-1B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-12T01:30:50Z | ---
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** yeok
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lucas1026/aslora_Adamw_altfalse_lr0.0001_a8_r8_s8_seed31 | lucas1026 | 2025-04-12T01:30:44Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"region:us"
]
| null | 2025-04-12T01:30:41Z | ---
base_model: meta-llama/Meta-Llama-3-8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.2.dev0 |
TongZheng1999/FL_1000_NL_TT_gemma-2-9b-it-star-mixed_direct-OP-final_v2_10-2-3Rounds-iter-1 | TongZheng1999 | 2025-04-12T01:29:49Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"sft",
"conversational",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-12T01:10:05Z | ---
base_model: google/gemma-2-9b-it
library_name: transformers
model_name: FL_1000_NL_TT_gemma-2-9b-it-star-mixed_direct-OP-final_v2_10-2-3Rounds-iter-1
tags:
- generated_from_trainer
- alignment-handbook
- trl
- sft
licence: license
---
# Model Card for FL_1000_NL_TT_gemma-2-9b-it-star-mixed_direct-OP-final_v2_10-2-3Rounds-iter-1
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="TongZheng1999/FL_1000_NL_TT_gemma-2-9b-it-star-mixed_direct-OP-final_v2_10-2-3Rounds-iter-1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kidzheng/huggingface/runs/7qgqm4on)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.0
- Pytorch: 2.6.0
- Datasets: 3.3.1
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mehmetbayik/bert-base-uncased-finetuned-rte-run_1 | mehmetbayik | 2025-04-12T01:28:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-04-12T01:28:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
deeponh/malayalam_llama_NORMAL_distil_8b_3b | deeponh | 2025-04-12T01:27:08Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T12:42:14Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
duyntnet/Hamanasu-15B-Instruct-imatrix-GGUF | duyntnet | 2025-04-12T01:23:08Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"Hamanasu-15B-Instruct",
"text-generation",
"en",
"license:other",
"region:us",
"conversational"
]
| text-generation | 2025-04-11T23:43:07Z | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Hamanasu-15B-Instruct
---
Quantizations of https://huggingface.co/Delta-Vector/Hamanasu-15B-Instruct
### Open source inference clients/UIs
* [llama.cpp](https://github.com/ggerganov/llama.cpp)
* [KoboldCPP](https://github.com/LostRuins/koboldcpp)
* [ollama](https://github.com/ollama/ollama)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [jan](https://github.com/janhq/jan)
* [GPT4All](https://github.com/nomic-ai/gpt4all)
### Closed source inference clients/UIs
* [LM Studio](https://lmstudio.ai/)
* [Backyard AI](https://backyard.ai/)
* More will be added...
---
# From original readme
<i>After multiple days of training, I'm proud to showcase my very own Phi-4 Finetune, Pretrained on almost a billion tokens worth of Books from</i>
- `NewEden/Orion-LIT`
- `NewEden/Orion-Asstr-Stories-16K`
- `Mielikki/Erebus-87k`
<i>Then instruct tuned. This model shows great promise for roleplaying while keeping things fresh and new. (And most of, Unsloppy!)</i>
</div>
<div style="display: grid; grid-template-columns: repeat(auto-fit, minmax(250px, 1fr)); gap: 20px; margin: 20px 0;">
<div style="border:2px solid #333; border-radius:10px; padding:20px; background: rgba(0,0,0,0.2);">
## 💰 Prompting
<i>A known quirk of the model is overly verbose responses when generation length is uncapped, Please cap your maximum output tokens to 100~ tokens above what you prefer</i>
<i>This model uses ChatML formatting</i>
```python
<|im_start|>system
You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>
<|im_start|>User request
Take off your helmet.<|im_end|>
<|im_start|>No i shall not. This is the way.
```
</div>
|
mikeogezi/data_1b_lora_32_sample_500 | mikeogezi | 2025-04-12T01:23:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-12T01:22:50Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
boinavideo/link.video.18.boina.trending | boinavideo | 2025-04-12T01:19:44Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-04-12T01:19:36Z | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=Boina)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=Boina)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=Boina) |
Boina-akaamua-kujikula-Video/Boina.alichoka.kula.chakula.akaamua.kujikula.Video.oficial.link | Boina-akaamua-kujikula-Video | 2025-04-12T01:18:17Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-04-12T01:17:55Z | [🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=Boina)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=Boina)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=Boina) |
jekunz/smollm-360m-lora-fineweb-norwegian-bokmaal | jekunz | 2025-04-12T01:16:21Z | 13 | 0 | peft | [
"peft",
"safetensors",
"text-generation",
"conversational",
"no",
"dataset:HuggingFaceFW/fineweb-2",
"base_model:HuggingFaceTB/SmolLM2-360M-Instruct",
"base_model:adapter:HuggingFaceTB/SmolLM2-360M-Instruct",
"license:apache-2.0",
"region:us"
]
| text-generation | 2025-03-28T09:26:54Z | ---
license: apache-2.0
datasets:
- HuggingFaceFW/fineweb-2
language:
- 'no'
base_model:
- HuggingFaceTB/SmolLM2-360M-Instruct
pipeline_tag: text-generation
library_name: peft
--- |
trashpanda-org/Qwen2.5-72B-Azalea-v0 | trashpanda-org | 2025-04-12T01:12:40Z | 610 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:Columbidae/Qwen2.5-72B-Instruct",
"base_model:finetune:Columbidae/Qwen2.5-72B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T05:43:40Z | ---
base_model:
- Columbidae/Qwen2.5-72B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# trashpanda-org/Qwen2.5-72B-Azalea-v0

## Recommended settings
<p><b>Context/instruct template</b>: ChatML.</p>
<p><b>Samplers</b>: temperature at 0.9, min_p at 0.05, top_a at 0.3, TFS at 0.75, repetition_penalty at 1.03, DRY if you have access to it. Alternatively, top nsigma 1 with temp 1 worked fine during testing too.</p>
Master import for ST: [https://files.catbox.moe/b6nwbc.json](https://files.catbox.moe/b6nwbc.json)
## Thank you!
Big thanks to the folks in the trashpanda-org Discord server for testing and sending over some logs!
## Reviews
> Really liking the model, it's def up there with Snowdrop v0.
>
> It's able to handle side chars really well, even between rerolls, feeling like it's not just a fluke that side chars are integrated.
>
> Rerolls vary, which is good. Really loving the prose.
>
> Thinking kinda weird since it's often hallucinating giving itself its own direction, but the end result still good, so I suppose it's not really a problem.
>
> There's little to no slops, when there's some, it's really minor that I wouldn't really mind it.
>
> It's yappy at times (sometimes you'd need more than 1k output), but I'd say even when it's yapping, it's a good yap.
>
> Not as horny as in it'd jump on you, but it definitely teases. I'd say it's good, actually prefer this way.
>
> There's no positivity bias for sure, which is a 👍
>
> It's definitely smart, understanding my reply really well, at least in its final response, not sure wassup with some hallucination on the thinking as shown on the 4-5th images tho.
— Raihanbook
> It can definitely get filthy, but it's impersonating {{user}} every once in a while.
>
> Reasoning seems to work well? I'm currently just running off of the recommended sampler for Snowdrop and lowering the temp every few responses.
— Ema
> Too much "heaven and earth" slop, impersonation from time to time. Good with NSFW in general but rushes. Long answers with good narrative, V3-like vibe, a bit positivity bias maybe? Rushes events.
>
> Loved the fluff, sugar daddy smut was too soft. NSFW actions good but rushes to the end. Dead dove is unplayable. 6/10
— Carmenta
> The model is unhinged and horny (a positive for me), it suffers issues from Qwen's quirk of being prone to impersonation like most Qwen 72B models amd still have some slops here and there.
>
> However, that was under the circumstance of traditional samplers. idk, I haven't played with nsigma much, but I feel like it's refreshing. The prose improved so much and no impersonation across 10 swipes
>
> I'll say with nsigma this model is 10/10 for me. But I'll want to use it on Featherless, which doesn't support such parameters. I am going to test a bit more with traditional samplers and leave more feedback while it is still up.
>
> PS Tested this model after some claude/gemini log, so far no impersonation like my previous test did on a fresh chat
— OMGWTFBBQ
> I've noticed that just like any LLMs would it seems to have a certain habit or sometimes repeating what it said, or rerolling but still it is similar to the previous responses on lower temp which is about 0.7-0.8 but on high temps such as 1.18 or 1 when its creative it seems to struggle with the consistency, which is length and context of response, not sure if that has anything to do with temps however.
>
> As a non thinking model I am really impressed by its capabilities to generate responses that has high quality despite sometimes it started going onto its own merry way of repeating what it said (Not sure what the issue is but it does repeat pretty often even on high temp, not the full sentence but it'll be like probably the end part of a sentence will be the same as the first reroll?)
>
> It follow prompts pretty well so that's a pos too!
>
> It seems to not understand how to push the story onwards without the character leaving the scene if its response length was too long.
>
> There is certain times when it's tweaking a bit but hey, what LLM won't tweak right :thumbsmile:
>
> if Aza has a slightly better consistency in good quality responses I think it would be perfect, I enjoyed my roleplay with it otherwise!
— Sprout
> ok so i've tried a few different bots with Azalea, and it's pretty freaking good. it stays really true to the characters and their personalities. as for impersonating - i haven't had that issue so far. i feel like Azalea is actually pretty great
>
> from my further testing, it seems like it's either i get a perfect response within the first message, or i have to resend like 5 times to get another one of similar quality
— Shreksophoner
> The first few responses were really good, but after a few rerolls the llm does seem to struggle with following a set structure. Content-wise, it was decent, left me some room to advance the story. Creativity wasn't anything mind-blowing, but i still think it did a decent job. Prose was creative too.
>
> The quality of responses seems to be whats holding it back. Everything else was decent to good for me
— simon
> You could say its like a wild schizophrenic man with nsigma. the consistency of responses is random; some of his stories are really interesting while other stories are just bat shit insane, but overall hes fun to be around
>
> hes more boring on medication (no nsigma) so you convince him not to take it
— AIELO
> Can maintain subtlety about things from nudges. I like it. Reasoning, even if it's not Snowdrop, keeps track of secrets and what char/user knows, it's sick. Comes up with nice little details now and again that's not in the card. Speech patterns take hold, character portrayal is on point most of the time, dialogue is decent to good. Prose is generally better than I remember Snowdrop had at times, worse in others, situational.
>
> Without reasoning: It's fine, but narration seems to suffer compared to with-reasoning responses.
>
> Writes more diverse exposition for smut than any Marigold ever did, interesting. I've noticed it can be more horny for thinking responses.
>
> Drawbacks, however: tested it with a Spanish-speaking bot that Snowdrop does well with, and it's not interspersing Spanish in its responses like I'd expect it to. Other testers (and in my own testing), POV switch and user impersonation happens rarely. Tested with regular Qwen samplers for the most part - top nsigma didn't do well this time.
>
> Overall, did pretty well in my testing.
— Severian
## Some logs


























This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Columbidae/Qwen2.5-72B-Instruct](https://huggingface.co/Columbidae/Qwen2.5-72B-Instruct) as a base.
### Models Merged
The following models were included in the merge:
* ./stage1
* ./stage2
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: ./stage2
parameters:
weight: 1
density: 1
- model: ./stage1
parameters:
weight: 1
density: 1
merge_method: ties
base_model: Columbidae/Qwen2.5-72B-Instruct
parameters:
normalize: true
int8_mask: true
tokenizer_source: Columbidae/Qwen2.5-72B-Instruct
dtype: bfloat16
name: trashpanda-org/Qwen2.5-72B-Azalea-v0
```
|
deepManifold/qwen2-7b-instruct-trl-sft-ChartQA | deepManifold | 2025-04-12T01:11:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-11T23:47:39Z | ---
base_model: Qwen/Qwen2-VL-7B-Instruct
library_name: transformers
model_name: qwen2-7b-instruct-trl-sft-ChartQA
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2-7b-instruct-trl-sft-ChartQA
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="deepManifold/qwen2-7b-instruct-trl-sft-ChartQA", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/deep-manifold/qwen2-7b-instruct-trl-sft-ChartQA/runs/xmy2ea7a)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.4.1+cu121
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mikeogezi/data_3b_lora_16_sample_500 | mikeogezi | 2025-04-12T01:09:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-12T01:09:13Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cparedes/ppo-Pyramids | cparedes | 2025-04-12T01:02:50Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2025-04-12T01:02:45Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: cparedes/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
bowilleatyou/cb1f7c54-1204-40fd-9368-7a15a011bf36 | bowilleatyou | 2025-04-12T01:01:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-11T15:33:47Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mikeogezi/data_1b_lora_16_sample_500 | mikeogezi | 2025-04-12T00:58:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-12T00:58:29Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sleepdeprived3/Baptist-Christian-Bible-Expert-v2.0-22B-LoRA | sleepdeprived3 | 2025-04-12T00:50:24Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"generated_from_trainer",
"base_model:mistralai/Mistral-Small-Instruct-2409",
"base_model:adapter:mistralai/Mistral-Small-Instruct-2409",
"license:other",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-04-12T00:47:08Z | ---
library_name: peft
license: other
base_model: mistralai/Mistral-Small-Instruct-2409
tags:
- generated_from_trainer
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.8.1`
```yaml
base_model: mistralai/Mistral-Small-Instruct-2409
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
model_config:
trust_remote_code: true
tokenizer:
pad_token: "</s>"
padding_side: "right"
add_bos_token: true
add_eos_token: false
datasets:
- path: data/data.jsonl
type: chat_template
chat_template_strategy: tokenizer
field_messages: conversations
message_property_mappings:
role: role
content: content
roles:
user: ["user"]
assistant: ["assistant"]
system: ["system"]
load_in_4bit: true
adapter: qlora
lora_r: 64
lora_alpha: 32
lora_dropout: 0.1
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- up_proj
- down_proj
bf16: true
flash_attention: true
gradient_checkpointing: true
deepspeed: deepspeed_configs/zero2.json
gradient_accumulation_steps: 4
micro_batch_size: 8
num_epochs: 2
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 3e-6
warmup_ratio: 0.02
max_seq_length: 8192
pad_to_sequence_len: true
sample_packing: true
max_grad_norm: 1.0
output_dir: ./output
save_steps: 100
logging_steps: 10
save_safetensors: true
special_tokens:
pad_token: "</s>"
```
</details><br>
# output
This model is a fine-tuned version of [mistralai/Mistral-Small-Instruct-2409](https://huggingface.co/mistralai/Mistral-Small-Instruct-2409) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 8
- num_epochs: 2.0
### Training results
### Framework versions
- PEFT 0.15.1
- Transformers 4.51.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
kaarthu2003/whisper-small-telugu_learning-rate-1e-3 | kaarthu2003 | 2025-04-12T00:48:49Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2025-04-11T19:42:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yeok/Llama-3.2-1B-Instruct-RS-Faithful | yeok | 2025-04-12T00:44:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"grpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-11T03:31:27Z | ---
library_name: transformers
tags:
- unsloth
- trl
- grpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Subsets and Splits