| modelId (string: 5–139 chars) | author (string: 2–42 chars) | last_modified (timestamp[us, tz=UTC]: 2020-02-15 11:33:14 – 2025-08-02 18:27:42) | downloads (int64: 0–223M) | likes (int64: 0–11.7k) | library_name (549 classes) | tags (list: 1–4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC]: 2022-03-02 23:29:04 – 2025-08-02 18:24:50) | card (string: 11–1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
chengpt/fortunetelling | chengpt | 2025-02-26T16:21:41Z | 0 | 0 | null | ["gguf", "llama", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-02-26T13:18:19Z |
---
license: apache-2.0
---
|
kas1/ai-ml-t-tes2-dftopcat-data-dsr1-1.5b | kas1 | 2025-02-26T16:21:40Z | 0 | 0 | transformers | ["transformers", "safetensors", "Ayurveda", "Doshas", "Fine-Tuned Model", "LoRA", "OPT-1.3b", "endpoints_compatible", "region:us"] | null | 2025-02-26T15:47:58Z |
---
library_name: transformers
tags:
- Ayurveda
- Doshas
- Fine-Tuned Model
- LoRA
- OPT-1.3b
---
# Model Card for `ai-ml-t-tes2-dftopcat-data-dsr1-1.5b`
This is a fine-tuned version of the `facebook/opt-1.3b` model using the **LoRA (Low-Rank Adaptation)** technique. The model has been trained on a dataset focused on Ayurveda and the concept of doshas (Vata, Pitta, Kapha). Compared to the previous model (`ai-ml-t-tes1-dftopcat-data-dsr1-1.5b`), this version uses a larger base model and improved training parameters to generate more coherent and informative responses about Ayurvedic principles and their role in promoting health.
---
## Model Details
### Model Description
This model is a fine-tuned adaptation of the `facebook/opt-1.3b` base model, optimized for generating explanations related to Ayurveda and doshas. It uses the **LoRA** technique to reduce computational costs while maintaining performance. The training data consists of instructional prompts and corresponding outputs that explain Ayurvedic concepts like doshic constitution, balance, and their influence on health.
Compared to the previous version, which was fine-tuned from `facebook/opt-350m`, this model demonstrates noticeably better coherence, less repetition, and fewer inaccuracies. However, it still struggles with depth and specificity, particularly in explaining the Vata, Pitta, and Kapha doshas in detail.
- **Developed by:** kas1
- **Model type:** Causal Language Model (Fine-Tuned)
- **Language(s):** English
- **License:** [MIT License](https://opensource.org/licenses/MIT)
- **Finetuned from model:** [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b)
### Model Sources
- **Repository:** [kas1/ai-ml-t-tes2-dftopcat-data-dsr1-1.5b](https://huggingface.co/kas1/ai-ml-t-tes2-dftopcat-data-dsr1-1.5b)
- **Dataset:** [Abhaykoul/Ancient-Indian-Wisdom](https://huggingface.co/datasets/Abhaykoul/Ancient-Indian-Wisdom)
---
## Uses
### Direct Use
The model can be used to generate responses to questions about Ayurveda, particularly focusing on doshas and their role in health. It is suitable for educational purposes, answering FAQs, or providing introductory insights into Ayurvedic principles.
### Downstream Use
The model can be integrated into applications like chatbots, virtual assistants, or educational platforms that focus on alternative medicine and wellness.
### Out-of-Scope Use
The model is not designed for medical diagnosis, treatment recommendations, or generating content outside the scope of Ayurveda. Misuse or reliance on the model for critical health decisions is strongly discouraged.
---
## Bias, Risks, and Limitations
### Known Limitations
- While the model shows improvements over the previous version, it still occasionally generates repetitive or nonsensical phrases.
- Responses lack depth and specificity about Vata, Pitta, and Kapha doshas compared to expert-level explanations.
- The model sometimes introduces inaccuracies (e.g., misinterpreting doshas as "disease-causing elements") due to limitations in training data or fine-tuning.
### Improvements Over Previous Model
- **Reduced Repetition**: Adjustments to generation parameters (e.g., `repetition_penalty`) have significantly reduced redundant phrases.
- **Improved Coherence**: The use of a larger base model (`facebook/opt-1.3b`) has led to more structured and logical responses.
- **Fewer Inaccuracies**: The model avoids major errors (e.g., "doshas as hallucinations") seen in the previous version.
### Recommendations
- Use post-processing techniques to filter out irrelevant or inaccurate statements.
- Fine-tune the model further with more diverse and high-quality training data.
- Experiment with even larger base models (e.g., `facebook/opt-6.7b`) for improved performance.
---
## How to Get Started with the Model
To use this model, follow these steps:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel, PeftConfig
import torch

# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-1.3b",  # original base model
    torch_dtype=torch.float16,
    device_map="auto"
)

# Load the LoRA configuration and adapter
peft_config = PeftConfig.from_pretrained("kas1/ai-ml-t-tes2-dftopcat-data-dsr1-1.5b")
model = PeftModel.from_pretrained(base_model, "kas1/ai-ml-t-tes2-dftopcat-data-dsr1-1.5b")

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("kas1/ai-ml-t-tes2-dftopcat-data-dsr1-1.5b")
tokenizer.pad_token = tokenizer.eos_token

# Generate text
def generate_text(prompt, max_new_tokens=500):
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    with torch.no_grad():
        output = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            temperature=0.4,
            top_k=25,
            top_p=0.87,
            repetition_penalty=1.3
        )
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Test the model
prompt = "Ayurveda emphasizes the balance between doshas. How can understanding our doshic constitution promote better health?"
output = generate_text(prompt)
print(output)
```
|
shipjuls/Nball | shipjuls | 2025-02-26T16:20:12Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-02-26T15:43:24Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Nball
---
# Nball
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Nball` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('shipjuls/Nball', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
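Since `Nball` is the trigger word, include it in your prompt, e.g. `pipeline('a photo of Nball')` (illustrative prompt).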
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
genloop/smollm2_1.7B-instruct_news_headline_generation | genloop | 2025-02-26T16:19:27Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-02-26T16:17:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
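Until the authors fill this in, here is a hedged sketch based on the repo's tags (`llama`, `text-generation`) and its name, which suggests news headline generation; the prompt format is an assumption:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="genloop/smollm2_1.7B-instruct_news_headline_generation")
# The expected prompt format is undocumented; this instruction-style prompt is an assumption.
article = "The central bank cut interest rates by 50 basis points on Tuesday, citing slowing inflation."
print(generator(f"Generate a headline for this article:\n{article}", max_new_tokens=32)[0]["generated_text"])
```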
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vIDEO-Sophie-Rain-Spiderman-Updates/Sophie.Rain.Spiderman.Sophie.Rain.Spiderman.Video.Tutorial.Viral.Full.Video.Link | vIDEO-Sophie-Rain-Spiderman-Updates | 2025-02-26T16:18:29Z | 0 | 0 | null | ["region:us"] | null | 2025-02-26T16:17:34Z |
|
hamdfdfd/chatti | hamdfdfd | 2025-02-26T16:17:12Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-02-26T16:17:12Z |
---
license: apache-2.0
---
|
yssf-io/ppo-LunarLander-v2 | yssf-io | 2025-02-26T16:17:04Z | 1 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2025-02-25T15:56:47Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.06 +/- 14.59
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
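Until the author adds their code, here is a hedged sketch of loading and evaluating the checkpoint with `huggingface_sb3`; the checkpoint filename is an assumption, so check the repo's file listing for the actual name:

```python
import gymnasium as gym  # LunarLander-v2 requires the box2d extra: pip install "gymnasium[box2d]"
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption; verify it against the repository's files.
checkpoint = load_from_hub(repo_id="yssf-io/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```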
|
genloop/smollm2_360M-instruct_news_headline_generation | genloop | 2025-02-26T16:16:13Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-02-26T16:15:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
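A hedged sketch, analogous to the 1.7B variant earlier in this dump, with the same caveat that the prompt format is an assumption:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="genloop/smollm2_360M-instruct_news_headline_generation")
article = "Researchers unveiled a solar cell that retains 95% efficiency after a decade of simulated use."
print(generator(f"Generate a headline for this article:\n{article}", max_new_tokens=32)[0]["generated_text"])
```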
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
koray6/convnext-tiny-224-finetuned-eurosat | koray6 | 2025-02-26T16:14:01Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "convnext", "image-classification", "generated_from_trainer", "base_model:facebook/convnext-tiny-224", "base_model:finetune:facebook/convnext-tiny-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2025-02-26T14:52:05Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/convnext-tiny-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: convnext-tiny-224-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-finetuned-eurosat
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3390
- Accuracy: 0.9414
## Model description
More information needed
## Intended uses & limitations
More information needed
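Pending author-provided details, a minimal classification sketch with the `transformers` pipeline API (the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="koray6/convnext-tiny-224-finetuned-eurosat")
# Path is a placeholder; the pipeline's preprocessor handles resizing to 224x224 RGB.
predictions = classifier("path/to/satellite_image.png")
print(predictions)
```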
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2852 | 1.0 | 57 | 0.9943 | 0.8728 |
| 0.5203 | 2.0 | 114 | 0.4478 | 0.9327 |
| 0.3931 | 3.0 | 171 | 0.3390 | 0.9414 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.4.1+cu121
- Datasets 2.14.5
- Tokenizers 0.21.0
|
00K4M1/Q-Learning-FrozenLake-v1-4x4-no_slippery | 00K4M1 | 2025-02-26T16:12:39Z | 0 | 0 | null | ["FrozenLake-v1-4x4", "q-learning", "reinforcement-learning", "custom-implimentation", "model-index", "region:us"] | reinforcement-learning | 2025-02-26T16:11:25Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implimentation
model-index:
- name: Q-Learning-FrozenLake-v1-4x4-no_slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
model = load_from_hub(repo_id="00K4M1/Q-Learning-FrozenLake-v1-4x4-no_slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
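The snippet above is not self-contained: `gym` is never imported and `load_from_hub` is undefined. A runnable sketch, where the helper implementation is an assumption modeled on the Hugging Face Deep RL course utility:

```python
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model dict from the Hub and unpickle it (assumed helper).
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="00K4M1/Q-Learning-FrozenLake-v1-4x4-no_slippery", filename="q-learning.pkl")
# As the card notes, extra attributes may be needed, e.g. is_slippery=False.
env = gym.make(model["env_id"], is_slippery=False)
```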
|
lesso02/08f1c432-ff35-42aa-abf3-15ea4a95336c | lesso02 | 2025-02-26T16:11:38Z | 0 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-1.7B", "base_model:adapter:unsloth/SmolLM2-1.7B", "license:apache-2.0", "region:us"] | null | 2025-02-26T15:41:33Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 08f1c432-ff35-42aa-abf3-15ea4a95336c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: unsloth/SmolLM2-1.7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 40e9109629a4c483_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/40e9109629a4c483_train_data.json
type:
field_input: choices
field_instruction: task
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 50
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: true
hub_model_id: lesso02/08f1c432-ff35-42aa-abf3-15ea4a95336c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000202
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/40e9109629a4c483_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 20
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: bd5cade5-3208-4704-9a3c-2906840832ea
wandb_project: 02a
wandb_run: your_name
wandb_runid: bd5cade5-3208-4704-9a3c-2906840832ea
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 08f1c432-ff35-42aa-abf3-15ea4a95336c
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2383
## Model description
More information needed
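Pending a fuller description, a hedged sketch for loading the adapter, based on the PEFT library name and the base model declared in the config above:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the declared base model, then attach this repo's LoRA adapter on top.
base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM2-1.7B")
model = PeftModel.from_pretrained(base, "lesso02/08f1c432-ff35-42aa-abf3-15ea4a95336c")
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM2-1.7B")
```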
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000202
- train_batch_size: 4
- eval_batch_size: 4
- seed: 20
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 2.6262 |
| 2.4821 | 0.0042 | 50 | 2.3840 |
| 2.5197 | 0.0085 | 100 | 2.3227 |
| 2.2534 | 0.0127 | 150 | 2.3190 |
| 2.3244 | 0.0169 | 200 | 2.2666 |
| 2.1998 | 0.0211 | 250 | 2.2560 |
| 2.3972 | 0.0254 | 300 | 2.2496 |
| 2.0891 | 0.0296 | 350 | 2.2445 |
| 2.2914 | 0.0338 | 400 | 2.2401 |
| 2.1728 | 0.0381 | 450 | 2.2382 |
| 2.0895 | 0.0423 | 500 | 2.2383 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
LandCruiser/Ardennes_7 | LandCruiser | 2025-02-26T16:11:13Z | 0 | 0 | null | ["onnx", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us"] | any-to-any | 2025-02-26T16:03:05Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
genloop/DeepSeek-R1-Distill-Llama-8B-HSN-GRPO-2000-steps-adapter | genloop | 2025-02-26T16:10:26Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-02-26T16:10:21Z |
---
base_model: unsloth/deepseek-r1-distill-llama-8b
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** genloop
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-llama-8b
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lm-kit/qwen2-vl-2b-instruct-lmk | lm-kit | 2025-02-26T16:10:23Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-01-09T15:33:03Z |
---
license: apache-2.0
---
# Qwen2-VL-2B-Instruct
Original model: https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct
This repository contains the Qwen2-VL-2B-Instruct model stored in an .lmk file format, designed for inference with the LM-Kit SDK.
|
bomjara/ul_lama3 | bomjara | 2025-02-26T16:07:32Z | 0 | 0 | null | ["safetensors", "unsloth", "license:apache-2.0", "region:us"] | null | 2025-02-25T17:37:20Z |
---
license: apache-2.0
tags:
- unsloth
---
|
LandCruiser/Ardennes_5 | LandCruiser | 2025-02-26T16:07:17Z | 0 | 0 | null | ["onnx", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us"] | any-to-any | 2025-02-26T16:03:04Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
LandCruiser/Ardennes_3 | LandCruiser | 2025-02-26T16:07:14Z | 0 | 0 | null | ["onnx", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us"] | any-to-any | 2025-02-26T16:03:04Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
LandCruiser/Ardennes_6 | LandCruiser | 2025-02-26T16:07:02Z | 0 | 0 | null | ["onnx", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us"] | any-to-any | 2025-02-26T16:03:05Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
LandCruiser/Ardennes_4 | LandCruiser | 2025-02-26T16:06:53Z | 0 | 0 | null | ["onnx", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us"] | any-to-any | 2025-02-26T16:03:04Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
LandCruiser/Ardennes_2 | LandCruiser | 2025-02-26T16:06:25Z | 0 | 0 | null | ["onnx", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us"] | any-to-any | 2025-02-26T16:03:03Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
itdainb/SeaLLMs-v3-1.5B-bnb-4bit | itdainb | 2025-02-26T16:06:09Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us"] | text-generation | 2025-02-26T16:05:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
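The tags ("4-bit", "bitsandbytes", "text-generation", "conversational") suggest a pre-quantized chat model; a hedged loading sketch, assuming `bitsandbytes` is installed and the checkpoint ships with its quantization config:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "itdainb/SeaLLMs-v3-1.5B-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Assumption: the repo stores bitsandbytes 4-bit weights, so no extra quantization config is passed here.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Xin chào! Introduce yourself briefly."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```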
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ButchersBrain/TMNT | ButchersBrain | 2025-02-26T16:05:54Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-02-26T15:51:50Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TMNT
---
# Tmnt
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TMNT` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ButchersBrain/TMNT', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
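Since `TMNT` is the trigger word, include it in your prompt, e.g. `pipeline('TMNT eating pizza on a rooftop')` (illustrative prompt).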
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
griko/age_reg_ann_ecapa_librosa_combined | griko | 2025-02-26T16:04:34Z | 0 | 0 | null | ["joblib", "ann", "age-estimation", "speaker-characteristics", "speaker-recognition", "audio-regression", "voice-analysis", "multilingual", "dataset:voxceleb2", "dataset:timit", "arxiv:2502.17579", "license:apache-2.0", "region:us"] | null | 2024-11-20T13:49:22Z |
---
language: multilingual
license: apache-2.0
datasets:
- voxceleb2
- timit
libraries:
- speechbrain
- librosa
tags:
- age-estimation
- speaker-characteristics
- speaker-recognition
- audio-regression
- voice-analysis
---
# Age Estimation Model
This model combines the SpeechBrain ECAPA-TDNN speaker embedding model with an ANN regressor to predict speaker age from audio input. The model uses ECAPA embeddings and Librosa acoustic features, trained on both VoxCeleb2 and TIMIT datasets.
## Model Performance Comparison
We provide multiple pre-trained models with different architectures and feature sets. Here's a comprehensive comparison of their performance:
| Model | Architecture | Features | Training Data | Test MAE | Best For |
|-------|-------------|----------|---------------|-----------|----------|
| VoxCeleb2 SVR (223) | SVR | ECAPA + Librosa (223-dim) | VoxCeleb2 | 7.88 years | Best performance on VoxCeleb2 |
| VoxCeleb2 SVR (192) | SVR | ECAPA only (192-dim) | VoxCeleb2 | 7.89 years | Lightweight deployment |
| TIMIT ANN (192) | ANN | ECAPA only (192-dim) | TIMIT | 4.95 years | Clean studio recordings |
| Combined ANN (223) | ANN | ECAPA + Librosa (223-dim) | VoxCeleb2 + TIMIT | 6.93 years | Best general performance |
You may find other models [here](https://huggingface.co/griko).
## Model Details
- Input: Audio file (converted to 16 kHz, mono)
- Output: Predicted age in years (continuous value)
- Features:
- SpeechBrain ECAPA-TDNN embedding [192 features]
- Additional Librosa features [31 features]
- Regressor: Artificial Neural Network optimized through Optuna
- Performance:
- Combined test set: 6.93 years Mean Absolute Error (MAE)
## Features
1. SpeechBrain ECAPA-TDNN embeddings (192 dimensions)
2. Librosa acoustic features (31 dimensions):
- 13 MFCCs
- 13 Delta MFCCs
- Zero crossing rate
- Spectral centroid
- Spectral bandwidth
- Spectral contrast
- Spectral flatness
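The card does not say how these frame-level features are aggregated into 31 dimensions. One plausible assembly, assuming each feature is averaged over time and the 7-band spectral contrast is collapsed to a single value (both assumptions) so the counts sum to 31:

```python
import numpy as np
import librosa

def librosa_features(path: str) -> np.ndarray:
    """Sketch of a 31-dim acoustic feature vector (aggregation is an assumption)."""
    y, sr = librosa.load(path, sr=16000, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)           # (13, T)
    delta = librosa.feature.delta(mfcc)                          # (13, T)
    zcr = librosa.feature.zero_crossing_rate(y)                  # (1, T)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)     # (1, T)
    bandwidth = librosa.feature.spectral_bandwidth(y=y, sr=sr)   # (1, T)
    contrast = librosa.feature.spectral_contrast(y=y, sr=sr)     # (7, T); collapsed below (assumption)
    flatness = librosa.feature.spectral_flatness(y=y)            # (1, T)
    return np.concatenate([
        mfcc.mean(axis=1),               # 13
        delta.mean(axis=1),              # 13
        zcr.mean(axis=1),                # 1
        centroid.mean(axis=1),           # 1
        bandwidth.mean(axis=1),          # 1
        np.atleast_1d(contrast.mean()),  # 1 (band-averaged; assumption)
        flatness.mean(axis=1),           # 1
    ])                                   # 31 dims total
```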
## Training Data
The model was trained on a combination of datasets:
- VoxCeleb2:
- YouTube interview recordings
- Age data from Wikidata and public sources
- Voice activity detection applied
- TIMIT:
- Studio-quality recordings
- Original age annotations
- All audio preprocessed to 16kHz, mono
## Installation
```bash
pip install git+https://github.com/griko/voice-age-regression.git#egg=voice-age-regressor[full]
```
## Usage
```python
from age_regressor import AgeRegressionPipeline
# Load the pipeline
regressor = AgeRegressionPipeline.from_pretrained(
"griko/age_reg_ann_ecapa_librosa_combined"
)
# Single file prediction
result = regressor("path/to/audio.wav")
print(f"Predicted age: {result[0]:.1f} years")
# Batch prediction
results = regressor(["audio1.wav", "audio2.wav"])
print(f"Predicted ages: {[f'{age:.1f}' for age in results]} years")
```
## Limitations
- Model was trained on a mix of YouTube interviews and studio recordings
- Performance may vary on different audio qualities or recording conditions
- Age predictions are estimates and should not be used for medical or legal purposes
- Age estimations should be treated as approximate values, not exact measurements
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{koushnir2025vanpyvoiceanalysisframework,
title={VANPY: Voice Analysis Framework},
author={Gregory Koushnir and Michael Fire and Galit Fuhrmann Alpert and Dima Kagan},
year={2025},
eprint={2502.17579},
archivePrefix={arXiv},
primaryClass={cs.SD},
url={https://arxiv.org/abs/2502.17579},
}
```
|
LucaZilli/model-snowflake-m_20250226_153737_finalmodel | LucaZilli | 2025-02-26T16:04:05Z | 0 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:25310", "loss:CosineSimilarityLoss", "arxiv:1908.10084", "base_model:Snowflake/snowflake-arctic-embed-m", "base_model:finetune:Snowflake/snowflake-arctic-embed-m", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2025-02-26T16:03:26Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:25310
- loss:CosineSimilarityLoss
base_model: Snowflake/snowflake-arctic-embed-m
widget:
- source_sentence: encryption algorithms for mobile transactions
sentences:
- equipaggiamento per sport acquatici
- finanziamenti a lungo termine per privati
- encryption algorithms for mobile banking
- source_sentence: tecnologie di liofilizzazione per frutta e verdura
sentences:
- serbatoi di fermentazione in acciaio inox per cantine
- impianti di liofilizzazione per frutta e verdura
- medical cannulas
- source_sentence: servizi di installazione di cavi sottomarini
sentences:
- servizi di installazione di cavi sottomarini
- custom spinal fusion implants
- soluzioni disinfettanti per il settore sanitario
- source_sentence: antifouling paint for yachts
sentences:
- sistemi di ventilazione con controllo umiditΓ integrato
- robot per la movimentazione interna
- vernici per automobili
- source_sentence: materiali isolanti per sistemi radianti a soffitto
sentences:
- Produzione di contenuti per social media nel settore moda.
- privacy and data protection training
- materiali isolanti per edifici
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- cosine_accuracy
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: custom dataset
type: custom_dataset
metrics:
- type: pearson_cosine
value: 0.7497809373528005
name: Pearson Cosine
- type: spearman_cosine
value: 0.7616341455252776
name: Spearman Cosine
- task:
type: triplet
name: Triplet
dataset:
name: all nli dataset
type: all_nli_dataset
metrics:
- type: cosine_accuracy
value: 0.7858662605285645
name: Cosine Accuracy
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: stsbenchmark
type: stsbenchmark
metrics:
- type: pearson_cosine
value: 0.6751374371492788
name: Pearson Cosine
- type: spearman_cosine
value: 0.6961828350042979
name: Spearman Cosine
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m) <!-- at revision fc74610d18462d218e312aa986ec5c8a75a98152 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the π€ Hub
model = SentenceTransformer("LucaZilli/model-snowflake-m_20250226_153737_finalmodel")
# Run inference
sentences = [
'materiali isolanti per sistemi radianti a soffitto',
'materiali isolanti per edifici',
'privacy and data protection training',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Datasets: `custom_dataset` and `stsbenchmark`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | custom_dataset | stsbenchmark |
|:--------------------|:---------------|:-------------|
| pearson_cosine | 0.7498 | 0.6751 |
| **spearman_cosine** | **0.7616** | **0.6962** |
#### Triplet
* Dataset: `all_nli_dataset`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.7859** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 25,310 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 13.32 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 11.06 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.49</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:---------------------------------------------------------------------------------|:--------------------------------------------------------------------|:-----------------|
| <code>ottimizzazione dei tempi di produzione per capi sartoriali di lusso</code> | <code>strumenti per l'ottimizzazione dei tempi di produzione</code> | <code>0.6</code> |
| <code>software di programmazione robotica per lucidatura</code> | <code>software gestionale generico</code> | <code>0.4</code> |
| <code>rete di sensori per l'analisi del suolo in tempo reale</code> | <code>software per gestione aziendale</code> | <code>0.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 3,164 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 5 tokens</li><li>mean: 13.61 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 11.39 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.49</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:------------------------------------------------------|:------------------------------------------------------------------------------|:-----------------|
| <code>ispezioni regolari per camion aziendali</code> | <code>ispezioni regolari per camion di consegna</code> | <code>1.0</code> |
| <code>blister packaging machines GMP compliant</code> | <code>food packaging machines</code> | <code>0.4</code> |
| <code>EMI shielding paints for electronics</code> | <code>Vernici per schermatura elettromagnetica dispositivi elettronici</code> | <code>0.8</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | custom_dataset_spearman_cosine | all_nli_dataset_cosine_accuracy | stsbenchmark_spearman_cosine |
|:------:|:----:|:-------------:|:---------------:|:------------------------------:|:-------------------------------:|:----------------------------:|
| -1 | -1 | - | - | 0.7616 | 0.7859 | 0.6962 |
| 0.1264 | 200 | 0.0799 | 0.0379 | - | - | - |
| 0.2528 | 400 | 0.0349 | 0.0285 | - | - | - |
| 0.3793 | 600 | 0.0302 | 0.0266 | - | - | - |
| 0.5057 | 800 | 0.0288 | 0.0283 | - | - | - |
| 0.6321 | 1000 | 0.0274 | 0.0252 | - | - | - |
| 0.7585 | 1200 | 0.0259 | 0.0250 | - | - | - |
| 0.8850 | 1400 | 0.0251 | 0.0236 | - | - | - |
| 1.0114 | 1600 | 0.0218 | 0.0227 | - | - | - |
| 1.1378 | 1800 | 0.0166 | 0.0247 | - | - | - |
| 1.2642 | 2000 | 0.0158 | 0.0228 | - | - | - |
| 1.3906 | 2200 | 0.017 | 0.0221 | - | - | - |
| 1.5171 | 2400 | 0.0163 | 0.0223 | - | - | - |
| 1.6435 | 2600 | 0.0172 | 0.0229 | - | - | - |
| 1.7699 | 2800 | 0.0168 | 0.0210 | - | - | - |
| 1.8963 | 3000 | 0.0168 | 0.0211 | - | - | - |
| 2.0228 | 3200 | 0.015 | 0.0211 | - | - | - |
| 2.1492 | 3400 | 0.0099 | 0.0206 | - | - | - |
| 2.2756 | 3600 | 0.01 | 0.0218 | - | - | - |
| 2.4020 | 3800 | 0.0099 | 0.0208 | - | - | - |
| 2.5284 | 4000 | 0.0102 | 0.0200 | - | - | - |
| 2.6549 | 4200 | 0.0102 | 0.0206 | - | - | - |
| 2.7813 | 4400 | 0.0109 | 0.0198 | - | - | - |
| 2.9077 | 4600 | 0.0106 | 0.0196 | - | - | - |
| 3.0341 | 4800 | 0.0087 | 0.0199 | - | - | - |
| 3.1606 | 5000 | 0.0067 | 0.0194 | - | - | - |
| 3.2870 | 5200 | 0.0065 | 0.0194 | - | - | - |
| 3.4134 | 5400 | 0.0071 | 0.0193 | - | - | - |
| 3.5398 | 5600 | 0.0068 | 0.0195 | - | - | - |
| 3.6662 | 5800 | 0.0067 | 0.0196 | - | - | - |
| 3.7927 | 6000 | 0.0069 | 0.0197 | - | - | - |
| 3.9191 | 6200 | 0.007 | 0.0202 | - | - | - |
| 4.0455 | 6400 | 0.006 | 0.0190 | - | - | - |
| 4.1719 | 6600 | 0.0048 | 0.0192 | - | - | - |
| 4.2984 | 6800 | 0.0047 | 0.0192 | - | - | - |
| 4.4248 | 7000 | 0.0047 | 0.0193 | - | - | - |
| 4.5512 | 7200 | 0.0048 | 0.0191 | - | - | - |
| 4.6776 | 7400 | 0.0047 | 0.0190 | - | - | - |
| 4.8040 | 7600 | 0.0049 | 0.0190 | - | - | - |
| 4.9305 | 7800 | 0.0046 | 0.0190 | - | - | - |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
|
sobamchan/contriever-sentencetransformer
|
sobamchan
| 2025-02-26T16:04:02Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-02-26T16:02:27Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sobamchan/contriever-sentencetransformer
This is a port of the [Contriever Model](https://huggingface.co/facebook/contriever) to a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```bash
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sobamchan/contriever-sentencetransformer')
embeddings = model.encode(sentences)
print(embeddings)
```
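Since the embeddings live in a shared 768-dimensional space, a cosine-similarity check is a natural next step. A minimal sketch using the library's built-in utility (the sentences are placeholders):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sobamchan/contriever-sentencetransformer')
query = model.encode("How do dense retrievers work?", convert_to_tensor=True)
docs = model.encode(["Dense retrievers embed queries and passages into one vector space.",
                     "Bananas are rich in potassium."], convert_to_tensor=True)
# Cosine similarity between the query and each candidate document
print(util.cos_sim(query, docs))  # the first document should score higher
```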
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 509, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
Have a look at: [Contriever Model](https://github.com/facebookresearch/contriever).
|
cdtmc/llama-3_1-1B-imdb_seq_cls
|
cdtmc
| 2025-02-26T16:03:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-26T16:03:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TareksLab/UL3.3-Nemo-X80-BASE-70B
|
TareksLab
| 2025-02-26T16:03:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"base_model:Sao10K/L3-70B-Euryale-v2.1",
"base_model:merge:Sao10K/L3-70B-Euryale-v2.1",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:nbeerbower/Llama-3.1-Nemotron-lorablated-70B",
"base_model:merge:nbeerbower/Llama-3.1-Nemotron-lorablated-70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T15:19:10Z |
---
base_model:
- Sao10K/L3-70B-Euryale-v2.1
- nbeerbower/Llama-3.1-Nemotron-lorablated-70B
- SicariusSicariiStuff/Negative_LLAMA_70B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [nbeerbower/Llama-3.1-Nemotron-lorablated-70B](https://huggingface.co/nbeerbower/Llama-3.1-Nemotron-lorablated-70B) as a base.
### Models Merged
The following models were included in the merge:
* [Sao10K/L3-70B-Euryale-v2.1](https://huggingface.co/Sao10K/L3-70B-Euryale-v2.1)
* [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Sao10K/L3-70B-Euryale-v2.1
- model: SicariusSicariiStuff/Negative_LLAMA_70B
merge_method: sce
base_model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
parameters:
select_topk: 0.80
dtype: float32
out_dtype: bfloat16
tokenizer:
source: union
```
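A merge like this is typically reproduced by pointing mergekit's CLI at the YAML above. A minimal sketch (the config filename and output directory are placeholders):
```bash
# Save the YAML above as sce_merge.yaml, then run mergekit's CLI
pip install mergekit
mergekit-yaml sce_merge.yaml ./UL3.3-Nemo-X80-BASE-70B --cuda
```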
|
griko/age_reg_ann_ecapa_timit
|
griko
| 2025-02-26T16:02:36Z | 0 | 0 | null |
[
"joblib",
"ann",
"age-estimation",
"speaker-characteristics",
"speaker-recognition",
"audio-regression",
"voice-analysis",
"multilingual",
"dataset:timit",
"arxiv:2502.17579",
"license:apache-2.0",
"region:us"
] | null | 2024-11-20T13:46:49Z |
---
language: multilingual
license: apache-2.0
datasets:
- timit
libraries:
- speechbrain
tags:
- age-estimation
- speaker-characteristics
- speaker-recognition
- audio-regression
- voice-analysis
---
# Age Estimation Model
This model combines the SpeechBrain ECAPA-TDNN speaker embedding model with an ANN regressor to predict speaker age from audio input. The model was trained on the TIMIT dataset.
## Model Performance Comparison
We provide multiple pre-trained models with different architectures and feature sets. Here's a comprehensive comparison of their performance:
| Model | Architecture | Features | Training Data | Test MAE | Best For |
|-------|-------------|----------|---------------|-----------|----------|
| VoxCeleb2 SVR (223) | SVR | ECAPA + Librosa (223-dim) | VoxCeleb2 | 7.88 years | Best performance on VoxCeleb2 |
| VoxCeleb2 SVR (192) | SVR | ECAPA only (192-dim) | VoxCeleb2 | 7.89 years | Lightweight deployment |
| TIMIT ANN (192) | ANN | ECAPA only (192-dim) | TIMIT | 4.95 years | Clean studio recordings |
| Combined ANN (223) | ANN | ECAPA + Librosa (223-dim) | VoxCeleb2 + TIMIT | 6.93 years | Best general performance |
You may find other models [here](https://huggingface.co/griko).
## Model Details
- Input: Audio file (will be converted to 16kHz, mono, single channel)
- Output: Predicted age in years (continuous value)
- Features: SpeechBrain ECAPA-TDNN embedding [192 features]
- Regressor: Artificial Neural Network optimized through Optuna
- Performance:
- TIMIT test set: 4.95 years Mean Absolute Error (MAE)
## Features
1. SpeechBrain ECAPA-TDNN embeddings (192 dimensions)
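For context, these input features can be reproduced with SpeechBrain's pretrained ECAPA-TDNN encoder. A minimal sketch (the import path varies slightly across SpeechBrain versions; the audio path is a placeholder):
```python
# Extract the 192-dim ECAPA-TDNN embedding this regressor consumes.
# Assumes a 16 kHz mono WAV, as the pipeline expects.
import torchaudio
from speechbrain.inference.speaker import EncoderClassifier  # speechbrain >= 1.0

encoder = EncoderClassifier.from_hparams(source="speechbrain/spkrec-ecapa-voxceleb")
signal, sr = torchaudio.load("path/to/audio.wav")
embedding = encoder.encode_batch(signal)  # shape: (1, 1, 192)
print(embedding.squeeze().shape)
```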
## Training Data
The model was trained on the TIMIT dataset:
- High-quality studio recordings
- Single channel, 16kHz sampling rate
- Carefully controlled recording conditions
- Age annotations provided in the original dataset
## Installation
```bash
pip install git+https://github.com/griko/voice-age-regression.git#egg=voice-age-regressor[ann-ecapa-timit]
```
## Usage
```python
from age_regressor import AgeRegressionPipeline
# Load the pipeline
regressor = AgeRegressionPipeline.from_pretrained(
"griko/age_reg_ann_ecapa_timit"
)
# Single file prediction
result = regressor("path/to/audio.wav")
print(f"Predicted age: {result[0]:.1f} years")
# Batch prediction
results = regressor(["audio1.wav", "audio2.wav"])
print(f"Predicted ages: {[f'{age:.1f}' for age in results]} years")
```
## Limitations
- Model was trained on carefully controlled studio recordings
- Performance may vary on different audio qualities or recording conditions
- Age predictions are estimates and should not be used for medical or legal purposes
- Age estimations should be treated as approximate values, not exact measurements
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{koushnir2025vanpyvoiceanalysisframework,
title={VANPY: Voice Analysis Framework},
author={Gregory Koushnir and Michael Fire and Galit Fuhrmann Alpert and Dima Kagan},
year={2025},
eprint={2502.17579},
archivePrefix={arXiv},
primaryClass={cs.SD},
url={https://arxiv.org/abs/2502.17579},
}
```
|
VPTQ-community/Mistral-Large-Instruct-2407-v8-k65536-65536-woft
|
VPTQ-community
| 2025-02-26T16:01:51Z | 32 | 2 | null |
[
"safetensors",
"llama",
"VPTQ",
"Quantized",
"Quantization",
"arxiv:2409.17066",
"base_model:mistralai/Mistral-Large-Instruct-2407",
"base_model:quantized:mistralai/Mistral-Large-Instruct-2407",
"license:other",
"vptq",
"region:us"
] | null | 2024-10-18T05:32:59Z |
---
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
base_model:
- mistralai/Mistral-Large-Instruct-2407
base_model_relation: quantized
tags:
- VPTQ
- Quantized
- Quantization
---
**Disclaimer**:
The model is reproduced based on the paper *VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models* ([GitHub](https://github.com/microsoft/vptq), [arXiv](https://arxiv.org/abs/2409.17066)).
The model itself is sourced from a community release.
It is intended only for experimental purposes.
Users are responsible for any consequences arising from the use of this model.
**Note**:
The PPL test results are for reference only and were collected using the GPTQ testing script.
```json
{
"ctx_2048": {
"wikitext2": 2.858274459838867,
"c4": 5.985574722290039,
"c4-new": 6.604180812835693
},
"ctx_4096": {
"wikitext2": 2.7024664878845215,
"c4": 5.569791793823242,
"c4-new": 6.241445064544678
},
"ctx_8192": {}
}
```
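The card itself ships no loading snippet; below is a minimal sketch of how VPTQ-quantized checkpoints are generally loaded with the `vptq` package (API taken from the VPTQ repository README; verify against the version you install):
```python
# Load the quantized checkpoint through the vptq package and run a short generation.
import transformers
import vptq

model_id = "VPTQ-community/Mistral-Large-Instruct-2407-v8-k65536-65536-woft"
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
model = vptq.AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain vector quantization briefly.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```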
|
griko/age_reg_svr_ecapa_voxceleb2
|
griko
| 2025-02-26T16:01:12Z | 0 | 0 | null |
[
"joblib",
"svr",
"age-estimation",
"speaker-characteristics",
"speaker-recognition",
"audio-regression",
"voice-analysis",
"multilingual",
"dataset:voxceleb2",
"arxiv:2502.17579",
"license:apache-2.0",
"region:us"
] | null | 2024-11-20T13:37:22Z |
---
language: multilingual
license: apache-2.0
datasets:
- voxceleb2
libraries:
- speechbrain
tags:
- age-estimation
- speaker-characteristics
- speaker-recognition
- audio-regression
- voice-analysis
---
# Age Estimation Model
This model combines the SpeechBrain ECAPA-TDNN speaker embedding model with an SVR regressor to predict speaker age from audio input. The model was trained on the VoxCeleb2 dataset.
## Model Performance Comparison
We provide multiple pre-trained models with different architectures and feature sets. Here's a comprehensive comparison of their performance:
| Model | Architecture | Features | Training Data | Test MAE | Best For |
|-------|-------------|----------|---------------|-----------|----------|
| VoxCeleb2 SVR (223) | SVR | ECAPA + Librosa (223-dim) | VoxCeleb2 | 7.88 years | Best performance on VoxCeleb2 |
| VoxCeleb2 SVR (192) | SVR | ECAPA only (192-dim) | VoxCeleb2 | 7.89 years | Lightweight deployment |
| TIMIT ANN (192) | ANN | ECAPA only (192-dim) | TIMIT | 4.95 years | Clean studio recordings |
| Combined ANN (223) | ANN | ECAPA + Librosa (223-dim) | VoxCeleb2 + TIMIT | 6.93 years | Best general performance |
You may find other models [here](https://huggingface.co/griko).
## Model Details
- Input: Audio file (will be converted to 16kHz, mono, single channel)
- Output: Predicted age in years (continuous value)
- Features: SpeechBrain ECAPA-TDNN embedding [192 features]
- Regressor: Support Vector Regression optimized through Optuna
- Performance:
- VoxCeleb2 test set: 7.89 years Mean Absolute Error (MAE)
## Features
1. SpeechBrain ECAPA-TDNN embeddings (192 dimensions)
## Training Data
The model was trained on the VoxCeleb2 dataset:
- Audio preprocessing:
- Converted to WAV format, single channel, 16kHz sampling rate
- Applied SileroVAD for voice activity detection, taking the first voiced segment
- Age data was collected from Wikidata and public sources
## Installation
```bash
pip install git+https://github.com/griko/voice-age-regression.git#egg=voice-age-regressor[svr-ecapa-voxceleb2]
```
## Usage
```python
from age_regressor import AgeRegressionPipeline
# Load the pipeline
regressor = AgeRegressionPipeline.from_pretrained(
"griko/age_reg_svr_ecapa_voxceleb2"
)
# Single file prediction
result = regressor("path/to/audio.wav")
print(f"Predicted age: {result[0]:.1f} years")
# Batch prediction
results = regressor(["audio1.wav", "audio2.wav"])
print(f"Predicted ages: {[f'{age:.1f}' for age in results]} years")
```
## Limitations
- Model was trained on celebrity voices from YouTube interview recordings
- Performance may vary on different audio qualities or recording conditions
- Age predictions are estimates and should not be used for medical or legal purposes
- Age estimations should be treated as approximate values, not exact measurements
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{koushnir2025vanpyvoiceanalysisframework,
title={VANPY: Voice Analysis Framework},
author={Gregory Koushnir and Michael Fire and Galit Fuhrmann Alpert and Dima Kagan},
year={2025},
eprint={2502.17579},
archivePrefix={arXiv},
primaryClass={cs.SD},
url={https://arxiv.org/abs/2502.17579},
}
```
|
babysharkdododo/gliner-multi-entities
|
babysharkdododo
| 2025-02-26T16:00:30Z | 0 | 0 | null |
[
"pytorch",
"ms",
"en",
"dataset:Generated.",
"base_model:urchade/gliner_multi-v2.1",
"base_model:finetune:urchade/gliner_multi-v2.1",
"region:us"
] | null | 2025-02-23T14:37:42Z |
---
language:
- ms
- en
base_model:
- urchade/gliner_multi-v2.1
datasets:
- Generated.
---
## Citation
```bibtex
@inproceedings{zaratiana-etal-2024-gliner,
    title = "{GL}i{NER}: Generalist Model for Named Entity Recognition using Bidirectional Transformer",
    author = "Zaratiana, Urchade and
      Tomeh, Nadi and
      Holat, Pierre and
      Charnois, Thierry",
    editor = "Duh, Kevin and
      Gomez, Helena and
      Bethard, Steven",
    booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
    month = jun,
    year = "2024",
    address = "Mexico City, Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.naacl-long.300",
    doi = "10.18653/v1/2024.naacl-long.300",
    pages = "5364--5376",
    abstract = "Named Entity Recognition (NER) is essential in various Natural Language Processing (NLP) applications. Traditional NER models are effective but limited to a set of predefined entity types. In contrast, Large Language Models (LLMs) can extract arbitrary entities through natural language instructions, offering greater flexibility. However, their size and cost, particularly for those accessed via APIs like ChatGPT, make them impractical in resource-limited scenarios. In this paper, we introduce a compact NER model trained to identify any type of entity. Leveraging a bidirectional transformer encoder, our model, GLiNER, facilitates parallel entity extraction, an advantage over the slow sequential token generation of LLMs. Through comprehensive testing, GLiNER demonstrate strong performance, outperforming both ChatGPT and fine-tuned LLMs in zero-shot evaluations on various NER benchmarks.",
}
```
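The card ships without a usage snippet; a minimal sketch of the standard `gliner` inference API follows (the example text and entity labels are placeholders):
```python
# Zero-shot NER with the gliner package.
from gliner import GLiNER

model = GLiNER.from_pretrained("babysharkdododo/gliner-multi-entities")
text = "Aisyah bought three shares of Maybank in Kuala Lumpur on Monday."
labels = ["person", "organization", "location", "date"]
for ent in model.predict_entities(text, labels):
    print(ent["text"], "=>", ent["label"])
```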
|
imageomics/butterfly_detection_yolo
|
imageomics
| 2025-02-26T16:00:20Z | 0 | 0 | null |
[
"biology",
"CV",
"images",
"animals",
"lepidoptera",
"butterflies",
"detection",
"heliconius",
"forewings",
"hindwings",
"separated wings",
"full body",
"butterfly",
"RGB",
"ruler",
"whitebalance",
"label",
"colorchecker",
"en",
"dataset:imageomics/Heliconius-Collection_Cambridge-Butterfly",
"dataset:imageomics/STRI-Samples",
"license:mit",
"region:us"
] | null | 2024-08-05T14:58:26Z |
---
license: mit
language:
- en
tags:
- biology
- CV
- images
- animals
- lepidoptera
- butterflies
- detection
- heliconius
- forewings
- hindwings
- separated wings
- full body
- butterfly
- RGB
- ruler
- whitebalance
- label
- colorchecker
datasets:
- imageomics/Heliconius-Collection_Cambridge-Butterfly
- imageomics/STRI-Samples
---
## Model Card for butterfly_detection_yolo
This model takes in images of butterflies as photographed for museum collections and detects butterfly components (L/R forewings, L/R hindwings and body) as well as color checkers and metadata labels.
The detection model described here is used in the repository https://github.com/Imageomics/wing-segmentation to detect components, which are then segmented with Meta's Segment Anything Model (SAM).
## Model Details
`yolo_detection_8m_shear_10.0_scale_0.5_translate_0.1_fliplr_0.0_best.pt` is the butterfly detection model.
The YOLOv8 detection model was trained on a dataset of 800 total images from the [Heliconius Collection-Cambridge Butterfly](https://huggingface.co/datasets/imageomics/Heliconius-Collection_Cambridge-Butterfly), OM_STRI, and Monteiro datasets. The model uses the pretrained yolov8m.pt model.
## Model Description
The model is responsible for taking an input image (RGB) and generating bounding boxes for all classes below that are found in the image. Data augmentations applied during training include shear (10.0), scale (0.5), and translate (0.1). The model was trained for 50 epochs with an image size of 256. Note that despite defining an image size of 256, the normalized predictions from YOLO can be rescaled to the original image size.
### Segmentation Classes
[`pixel class`] corresponding category
- [0] background
- [1] right_forewing
- [2] left_forewing
- [3] right_hindwing
- [4] left_hindwing
- [5] ruler
- [6] white_balance
- [7] label
- [8] color_card
- [9] body
### Details
```python
model.train(data=YAML,
            imgsz=256,
            epochs=50,
            batch=16,
            device=DEVICE,
            optimizer='auto',
            verbose=True,
            val=True,
            shear=10.0,
            scale=0.5,
            translate=0.1,
            fliplr=0.0
            )
```
## Metrics
| Class | Images | Instances | Box(P) | R | mAP50 | mAP50-95 |
|:---|---:|---:|---:|---:|---:|---:|
| all | 64 | 358 | 0.979 | 0.887 | 0.919 | 0.877 |
| background | 64 | 3 | 1 | 0 | 0.315 | 0.169 |
| right_forewing | 64 | 58 | 0.995 | 0.983 | 0.986 | 0.977 |
| left_forewing | 64 | 51 | 0.975 | 1 | 0.985 | 0.982 |
| right_hindwing | 64 | 59 | 0.997 | 0.966 | 0.993 | 0.977 |
| left_hindwing | 64 | 50 | 0.975 | 1 | 0.993 | 0.98 |
| ruler | 64 | 31 | 0.951 | 1 | 0.995 | 0.952 |
| white_balance | 64 | 18 | 0.984 | 1 | 0.995 | 0.995 |
| label | 64 | 50 | 0.996 | 1 | 0.995 | 0.935 |
| color_card | 64 | 24 | 0.988 | 1 | 0.995 | 0.992 |
| body | 64 | 14 | 0.928 | 0.921 | 0.939 | 0.815 |
**Developed by:** Michelle Ramirez
## How to Get Started with the Model
To view applications of how to load in the model file and predict masks on images, please refer to [this github repository](https://github.com/Imageomics/wing-segmentation)
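For orientation, a minimal sketch of running the detector with the `ultralytics` package (the weight filename is the one from this card; the image path is a placeholder):
```python
# Run butterfly-component detection with ultralytics YOLOv8.
from ultralytics import YOLO

model = YOLO("yolo_detection_8m_shear_10.0_scale_0.5_translate_0.1_fliplr_0.0_best.pt")
results = model.predict("specimen.jpg", imgsz=256)  # placeholder image path
for box in results[0].boxes:
    cls_id = int(box.cls)             # index into the class list above
    print(cls_id, box.xyxy.tolist())  # bounding box in original image coordinates
```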
## Citation
**BibTeX:**
```
@software{Ramirez_Lepidoptera_Wing_Segmentation_2024,
author = {Ramirez, Michelle},
doi = {10.5281/zenodo.10869579},
month = mar,
title = {{Lepidoptera Wing Segmentation}},
url = {https://github.com/Imageomics/wing-segmentation},
version = {1.0.0},
year = {2024}
}
```
**APA:**
Ramirez, M. (2024). Lepidoptera Wing Segmentation (Version 1.0.0) [Computer software]. https://doi.org/10.5281/zenodo.10869579
## Acknowledgements
The [Imageomics Institute](https://imageomics.org) is funded by the US National Science Foundation's Harnessing the Data Revolution (HDR) program under [Award #2118240](https://www.nsf.gov/awardsearch/showAward?AWD_ID=2118240) (Imageomics: A New Frontier of Biological Information Powered by Knowledge-Guided Machine Learning). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
|
mradermacher/deepthought-8b-abliterated-i1-GGUF
|
mradermacher
| 2025-02-26T16:00:06Z | 471 | 1 |
transformers
|
[
"transformers",
"gguf",
"abliterated",
"uncensored",
"en",
"base_model:huihui-ai/deepthought-8b-abliterated",
"base_model:quantized:huihui-ai/deepthought-8b-abliterated",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-24T17:26:47Z |
---
base_model: huihui-ai/deepthought-8b-abliterated
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- abliterated
- uncensored
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/huihui-ai/deepthought-8b-abliterated
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/deepthought-8b-abliterated-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
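For a quick local test, a typical llama.cpp invocation looks like this (a sketch; substitute whichever quant file you downloaded):
```bash
# Assumes llama.cpp is installed and the Q4_K_M quant was downloaded
llama-cli -m deepthought-8b-abliterated.i1-Q4_K_M.gguf \
  -p "Write a haiku about quantization." -n 128
```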
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/deepthought-8b-abliterated-i1-GGUF/resolve/main/deepthought-8b-abliterated.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/deepthought-8b-abliterated-i1-GGUF/resolve/main/deepthought-8b-abliterated.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/deepthought-8b-abliterated-i1-GGUF/resolve/main/deepthought-8b-abliterated.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/deepthought-8b-abliterated-i1-GGUF/resolve/main/deepthought-8b-abliterated.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/deepthought-8b-abliterated-i1-GGUF/resolve/main/deepthought-8b-abliterated.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/deepthought-8b-abliterated-i1-GGUF/resolve/main/deepthought-8b-abliterated.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/deepthought-8b-abliterated-i1-GGUF/resolve/main/deepthought-8b-abliterated.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/deepthought-8b-abliterated-i1-GGUF/resolve/main/deepthought-8b-abliterated.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/deepthought-8b-abliterated-i1-GGUF/resolve/main/deepthought-8b-abliterated.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/deepthought-8b-abliterated-i1-GGUF/resolve/main/deepthought-8b-abliterated.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/deepthought-8b-abliterated-i1-GGUF/resolve/main/deepthought-8b-abliterated.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/deepthought-8b-abliterated-i1-GGUF/resolve/main/deepthought-8b-abliterated.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/deepthought-8b-abliterated-i1-GGUF/resolve/main/deepthought-8b-abliterated.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/deepthought-8b-abliterated-i1-GGUF/resolve/main/deepthought-8b-abliterated.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/deepthought-8b-abliterated-i1-GGUF/resolve/main/deepthought-8b-abliterated.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/deepthought-8b-abliterated-i1-GGUF/resolve/main/deepthought-8b-abliterated.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/deepthought-8b-abliterated-i1-GGUF/resolve/main/deepthought-8b-abliterated.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/deepthought-8b-abliterated-i1-GGUF/resolve/main/deepthought-8b-abliterated.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/deepthought-8b-abliterated-i1-GGUF/resolve/main/deepthought-8b-abliterated.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/deepthought-8b-abliterated-i1-GGUF/resolve/main/deepthought-8b-abliterated.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/deepthought-8b-abliterated-i1-GGUF/resolve/main/deepthought-8b-abliterated.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/deepthought-8b-abliterated-i1-GGUF/resolve/main/deepthought-8b-abliterated.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/deepthought-8b-abliterated-i1-GGUF/resolve/main/deepthought-8b-abliterated.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/deepthought-8b-abliterated-i1-GGUF/resolve/main/deepthought-8b-abliterated.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
TobiGeth/tg_user_302351629_lora_1740584826
|
TobiGeth
| 2025-02-26T15:58:22Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-26T15:58:21Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: USER_302351629_1740584826
---
# Tg_User_302351629_Lora_1740584826
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `USER_302351629_1740584826` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('TobiGeth/tg_user_302351629_lora_1740584826', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
meantoffsas/my_style_LoRa
|
meantoffsas
| 2025-02-26T15:55:45Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-02-26T15:55:40Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: photo collage in CHERKASHIN style
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - meantoffsas/my_style_LoRa
<Gallery />
## Model description
These are meantoffsas/my_style_LoRa LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `photo collage in CHERKASHIN style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/meantoffsas/my_style_LoRa/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
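Pending the author's snippet above, here is a minimal sketch of the usual diffusers loading pattern for SDXL LoRA weights (the repo id and trigger phrase come from this card; everything else is standard boilerplate):
```python
# Load the SDXL base pipeline and attach these LoRA weights with diffusers.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("meantoffsas/my_style_LoRa")
image = pipe("photo collage in CHERKASHIN style").images[0]
image.save("collage.png")
```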
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
mradermacher/mergekit-slerp-xlblwaw-i1-GGUF
|
mradermacher
| 2025-02-26T15:55:23Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mergekit-community/mergekit-slerp-xlblwaw",
"base_model:quantized:mergekit-community/mergekit-slerp-xlblwaw",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-26T14:09:25Z |
---
base_model: mergekit-community/mergekit-slerp-xlblwaw
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mergekit-community/mergekit-slerp-xlblwaw
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF/resolve/main/mergekit-slerp-xlblwaw.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF/resolve/main/mergekit-slerp-xlblwaw.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF/resolve/main/mergekit-slerp-xlblwaw.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF/resolve/main/mergekit-slerp-xlblwaw.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF/resolve/main/mergekit-slerp-xlblwaw.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF/resolve/main/mergekit-slerp-xlblwaw.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF/resolve/main/mergekit-slerp-xlblwaw.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF/resolve/main/mergekit-slerp-xlblwaw.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF/resolve/main/mergekit-slerp-xlblwaw.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF/resolve/main/mergekit-slerp-xlblwaw.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF/resolve/main/mergekit-slerp-xlblwaw.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF/resolve/main/mergekit-slerp-xlblwaw.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF/resolve/main/mergekit-slerp-xlblwaw.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF/resolve/main/mergekit-slerp-xlblwaw.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF/resolve/main/mergekit-slerp-xlblwaw.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF/resolve/main/mergekit-slerp-xlblwaw.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF/resolve/main/mergekit-slerp-xlblwaw.i1-IQ4_NL.gguf) | i1-IQ4_NL | 5.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF/resolve/main/mergekit-slerp-xlblwaw.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF/resolve/main/mergekit-slerp-xlblwaw.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF/resolve/main/mergekit-slerp-xlblwaw.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF/resolve/main/mergekit-slerp-xlblwaw.i1-Q4_1.gguf) | i1-Q4_1 | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF/resolve/main/mergekit-slerp-xlblwaw.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF/resolve/main/mergekit-slerp-xlblwaw.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF/resolve/main/mergekit-slerp-xlblwaw.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
AdAstraAbyssoque/Qwen2.5-1.5B-Open-R1-GRPO-MCP500-0
|
AdAstraAbyssoque
| 2025-02-26T15:54:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:AdAstraAbyssoque/MCP500_esay",
"arxiv:2402.03300",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T15:38:30Z |
---
datasets: AdAstraAbyssoque/MCP500_esay
library_name: transformers
model_name: Qwen2.5-1.5B-Open-R1-GRPO-MCP500-0
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-1.5B-Open-R1-GRPO-MCP500-0
This model is a fine-tuned version of an unspecified base model on the [AdAstraAbyssoque/MCP500_esay](https://huggingface.co/datasets/AdAstraAbyssoque/MCP500_esay) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AdAstraAbyssoque/Qwen2.5-1.5B-Open-R1-GRPO-MCP500-0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bowen_liu-hkust/huggingface/runs/wqyffald)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin GallouΓ©dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mradermacher/alpaca-13b-i1-GGUF
|
mradermacher
| 2025-02-26T15:53:06Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:chavinlo/alpaca-13b",
"base_model:quantized:chavinlo/alpaca-13b",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-02-26T08:57:30Z |
---
base_model: chavinlo/alpaca-13b
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/chavinlo/alpaca-13b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/alpaca-13b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/alpaca-13b-i1-GGUF/resolve/main/alpaca-13b.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/alpaca-13b-i1-GGUF/resolve/main/alpaca-13b.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/alpaca-13b-i1-GGUF/resolve/main/alpaca-13b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/alpaca-13b-i1-GGUF/resolve/main/alpaca-13b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/alpaca-13b-i1-GGUF/resolve/main/alpaca-13b.i1-IQ2_S.gguf) | i1-IQ2_S | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/alpaca-13b-i1-GGUF/resolve/main/alpaca-13b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/alpaca-13b-i1-GGUF/resolve/main/alpaca-13b.i1-IQ2_M.gguf) | i1-IQ2_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/alpaca-13b-i1-GGUF/resolve/main/alpaca-13b.i1-Q2_K.gguf) | i1-Q2_K | 5.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/alpaca-13b-i1-GGUF/resolve/main/alpaca-13b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/alpaca-13b-i1-GGUF/resolve/main/alpaca-13b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/alpaca-13b-i1-GGUF/resolve/main/alpaca-13b.i1-IQ3_S.gguf) | i1-IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/alpaca-13b-i1-GGUF/resolve/main/alpaca-13b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/alpaca-13b-i1-GGUF/resolve/main/alpaca-13b.i1-IQ3_M.gguf) | i1-IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/alpaca-13b-i1-GGUF/resolve/main/alpaca-13b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/alpaca-13b-i1-GGUF/resolve/main/alpaca-13b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/alpaca-13b-i1-GGUF/resolve/main/alpaca-13b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/alpaca-13b-i1-GGUF/resolve/main/alpaca-13b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/alpaca-13b-i1-GGUF/resolve/main/alpaca-13b.i1-Q4_0.gguf) | i1-Q4_0 | 7.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/alpaca-13b-i1-GGUF/resolve/main/alpaca-13b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/alpaca-13b-i1-GGUF/resolve/main/alpaca-13b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/alpaca-13b-i1-GGUF/resolve/main/alpaca-13b.i1-Q4_1.gguf) | i1-Q4_1 | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/alpaca-13b-i1-GGUF/resolve/main/alpaca-13b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/alpaca-13b-i1-GGUF/resolve/main/alpaca-13b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/alpaca-13b-i1-GGUF/resolve/main/alpaca-13b.i1-Q6_K.gguf) | i1-Q6_K | 10.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Horizon6957/DeepSeek-myth-large-qna-cot
|
Horizon6957
| 2025-02-26T15:53:03Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T15:51:41Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wrbit01/dt
|
wrbit01
| 2025-02-26T15:52:34Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-02-26T15:52:31Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
DNTRMP donald trump, 75 years old, blonde hair, disheveled hairstyle,
light-colored eyes, frowning expression, displeased emotion, large head in
proportion to body, exaggerated facial features, slouched posture, arms
hanging loosely, wearing a formal black suit, white shirt, bright red tie,
small lapel pin, cartoonish style, high contrast black and white shading,
neutral lighting, average build, white background, political caricature,
XKCD style
output:
url: images/a_photo_of_DNTRMP(5).png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: dt
---
# dt
<Gallery />
## Trigger words
You should use `dt` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/wrbit01/dt/tree/main) them in the Files & versions tab.
|
Nexesenex/Llama_3.2_1b_Odyssea_V1.01-GGUF
|
Nexesenex
| 2025-02-26T15:52:14Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Nexesenex/Llama_3.2_1b_Odyssea_V1.01",
"base_model:quantized:Nexesenex/Llama_3.2_1b_Odyssea_V1.01",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-26T15:50:59Z |
---
base_model: Nexesenex/Llama_3.2_1b_Odyssea_V1.01
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Nexesenex/Llama_3.2_1b_Odyssea_V1.01-GGUF
IMPORTANT: These models are quantized with IK_Llama.cpp, not Llama.cpp.
This model was converted to GGUF format from [`Nexesenex/Llama_3.2_1b_Odyssea_V1.01`](https://huggingface.co/Nexesenex/Llama_3.2_1b_Odyssea_V1.01) using the llama.cpp fork IK_Llama via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Nexesenex/Llama_3.2_1b_Odyssea_V1.01) for more details on the model.
## Use with llama.cpp (I have not tested this path with IK_Llama)
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Nexesenex/Llama_3.2_1b_Odyssea_V1.01-GGUF --hf-file llama_3.2_1b_odyssea_v1.01-bf16.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Nexesenex/Llama_3.2_1b_Odyssea_V1.01-GGUF --hf-file llama_3.2_1b_odyssea_v1.01-bf16.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo; this route is necessary to use Croco.
Step 1: Clone the IK_Llama fork from GitHub (necessary to use Croco).
```
git clone https://github.com/Nexesenex/ik_llama.cpp.nxs
```
Step 2: Move into the cloned folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd ik_llama.cpp.nxs && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Nexesenex/Llama_3.2_1b_Odyssea_V1.01-GGUF --hf-file llama_3.2_1b_odyssea_v1.01-bf16.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Nexesenex/Llama_3.2_1b_Odyssea_V1.01-GGUF --hf-file llama_3.2_1b_odyssea_v1.01-bf16.gguf -c 2048
```
|
mradermacher/mergekit-slerp-xlblwaw-GGUF
|
mradermacher
| 2025-02-26T15:51:45Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mergekit-community/mergekit-slerp-xlblwaw",
"base_model:quantized:mergekit-community/mergekit-slerp-xlblwaw",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-26T13:46:25Z |
---
base_model: mergekit-community/mergekit-slerp-xlblwaw
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mergekit-community/mergekit-slerp-xlblwaw
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
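For multi-part quants, the split files can be rejoined by byte-concatenation before loading. A minimal Python sketch, assuming parts named with a `.partXofY` suffix (the exact naming is per-file; check the repo listing):
```python
# Minimal sketch: rejoin a multi-part GGUF by concatenating its parts in order.
# The ".partXofY" suffix is an assumption -- check the actual filenames first.
from pathlib import Path

parts = sorted(Path(".").glob("mergekit-slerp-xlblwaw.f16.gguf.part*"))
with open("mergekit-slerp-xlblwaw.f16.gguf", "wb") as out:
    for part in parts:
        out.write(part.read_bytes())  # append each part verbatim
```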
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-GGUF/resolve/main/mergekit-slerp-xlblwaw.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-GGUF/resolve/main/mergekit-slerp-xlblwaw.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-GGUF/resolve/main/mergekit-slerp-xlblwaw.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-GGUF/resolve/main/mergekit-slerp-xlblwaw.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-GGUF/resolve/main/mergekit-slerp-xlblwaw.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-GGUF/resolve/main/mergekit-slerp-xlblwaw.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-GGUF/resolve/main/mergekit-slerp-xlblwaw.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-GGUF/resolve/main/mergekit-slerp-xlblwaw.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-GGUF/resolve/main/mergekit-slerp-xlblwaw.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-GGUF/resolve/main/mergekit-slerp-xlblwaw.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-GGUF/resolve/main/mergekit-slerp-xlblwaw.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/mergekit-slerp-xlblwaw-GGUF/resolve/main/mergekit-slerp-xlblwaw.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
francescosabbarese/ppo-CartPole-v1
|
francescosabbarese
| 2025-02-26T15:50:43Z | 0 | 0 | null |
[
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-02-26T15:44:06Z |
---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 461.90 +/- 51.89
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'PPO',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'CartPole-v1',
 'total_timesteps': 200000,
 'learning_rate': 0.0001,
 'num_envs': 8,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.98,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'francescosabbarese/ppo-CartPole-v1',
 'batch_size': 1024,
 'minibatch_size': 256}
```
|
samoline/654dc279-936a-41d5-85a9-7be2612edd80
|
samoline
| 2025-02-26T15:50:35Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-135M",
"base_model:adapter:unsloth/SmolLM2-135M",
"license:apache-2.0",
"region:us"
] | null | 2025-02-26T15:48:34Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-135M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 654dc279-936a-41d5-85a9-7be2612edd80
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-135M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 77e3cf084fba86c3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/77e3cf084fba86c3_train_data.json
type:
field_input: problem_ko
field_instruction: problem
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: false
group_by_length: false
hub_model_id: samoline/654dc279-936a-41d5-85a9-7be2612edd80
hub_repo: samoline
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 4
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 4
lora_target_linear: true
lr_scheduler: cosine
max_steps: 2
micro_batch_size: 1
mlflow_experiment_name: /tmp/77e3cf084fba86c3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: samoline-nan
wandb_mode: online
wandb_name: 160dd077-f773-4651-9b50-8dad59f1e201
wandb_project: Gradients-On-Demand
wandb_run: dev
wandb_runid: 160dd077-f773-4651-9b50-8dad59f1e201
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 654dc279-936a-41d5-85a9-7be2612edd80
This model is a fine-tuned version of [unsloth/SmolLM2-135M](https://huggingface.co/unsloth/SmolLM2-135M) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
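A minimal sketch for loading the adapter on top of its base model with PEFT (assumed usage; repo ids taken from the config above):
```python
# Minimal sketch: attach this LoRA adapter to the SmolLM2 base model (assumed usage).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM2-135M")
model = PeftModel.from_pretrained(base, "samoline/654dc279-936a-41d5-85a9-7be2612edd80")
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM2-135M")
```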
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0001 | 2 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
qing-yao/long_first_headfinal_seed-21_1e-3
|
qing-yao
| 2025-02-26T15:50:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-24T16:22:21Z |
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: long_first_headfinal_seed-21_1e-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# long_first_headfinal_seed-21_1e-3
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0807
- Accuracy: 0.2038
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 21
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 6.1412 | 0.9994 | 1470 | 5.5240 | 0.1764 |
| 4.5259 | 1.9992 | 2940 | 5.4067 | 0.1823 |
| 3.8908 | 2.9991 | 4410 | 5.3111 | 0.1857 |
| 3.7115 | 3.9996 | 5881 | 5.2018 | 0.1937 |
| 3.4863 | 4.9994 | 7351 | 5.1925 | 0.1938 |
| 3.4079 | 5.9992 | 8821 | 5.1520 | 0.1973 |
| 3.3056 | 6.9991 | 10291 | 5.1326 | 0.1999 |
| 3.258 | 7.9996 | 11762 | 5.1119 | 0.1997 |
| 3.2065 | 8.9994 | 13232 | 5.1225 | 0.2009 |
| 3.1699 | 9.9992 | 14702 | 5.1300 | 0.1987 |
| 3.1451 | 10.9991 | 16172 | 5.0815 | 0.2020 |
| 3.1079 | 11.9996 | 17643 | 5.1214 | 0.2012 |
| 3.1043 | 12.9994 | 19113 | 5.0818 | 0.2012 |
| 3.0668 | 13.9992 | 20583 | 5.1290 | 0.2022 |
| 3.0777 | 14.9991 | 22053 | 5.1106 | 0.1996 |
| 3.039 | 15.9996 | 23524 | 5.1058 | 0.2006 |
| 3.0432 | 16.9994 | 24994 | 5.1083 | 0.2036 |
| 3.0188 | 17.9992 | 26464 | 5.1309 | 0.2016 |
| 3.0246 | 18.9991 | 27934 | 5.1190 | 0.1996 |
| 3.0115 | 19.9962 | 29400 | 5.0807 | 0.2038 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.20.0
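A minimal sketch for sampling from the checkpoint (assumed usage based on the `opt`/text-generation tags; the card does not specify the training data, so outputs may not be natural English):
```python
# Minimal sketch: generate from this from-scratch OPT checkpoint (assumed usage).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "qing-yao/long_first_headfinal_seed-21_1e-3"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("the cat", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20, do_sample=True)[0]))
```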
|
Hlc3058212270/DeepSeek-R1-Medical-COT-Tiny-1
|
Hlc3058212270
| 2025-02-26T15:49:35Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T15:13:42Z |
---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Hlc3058212270
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
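A minimal sketch for loading the checkpoint with π€ transformers (assumed usage; the repo ships pytorch/safetensors weights for a llama architecture):
```python
# Minimal sketch: load and generate with this fine-tuned checkpoint (assumed usage).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Hlc3058212270/DeepSeek-R1-Medical-COT-Tiny-1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("A patient presents with chest pain and shortness of breath.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```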
|
simnJS/autotrain-fxp6j-p5s8i
|
simnJS
| 2025-02-26T15:49:22Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"starcoder2",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"base_model:bigcode/starcoder2-3b",
"base_model:finetune:bigcode/starcoder2-3b",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T15:23:33Z |
---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: bigcode/starcoder2-3b
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to(model.device))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
nikita-nrg/llama11b_5epoch_length_merged_16bit
|
nikita-nrg
| 2025-02-26T15:46:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mllama",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-02-26T15:37:25Z |
---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** nikita-nrg
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
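A minimal sketch for running the merged 16-bit weights with transformers' Mllama classes (an assumption based on the `mllama`/image-text-to-text tags; the image URL is a placeholder):
```python
# Minimal sketch: image+text generation with this mllama checkpoint (assumed usage).
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

repo = "nikita-nrg/llama11b_5epoch_length_merged_16bit"
model = MllamaForConditionalGeneration.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")
processor = AutoProcessor.from_pretrained(repo)

image = Image.open(requests.get("https://example.com/image.jpg", stream=True).raw)  # placeholder URL
messages = [{"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "Describe this image."}]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, return_tensors="pt").to(model.device)
print(processor.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```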
|
PrunaAI/qnguyen3-nanoLLaVA-1.5-bnb-4bit-smashed
|
PrunaAI
| 2025-02-26T15:46:38Z | 0 | 0 | null |
[
"safetensors",
"llava-qwen2",
"pruna-ai",
"custom_code",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-26T15:45:48Z |
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ORIGINAL_REPO_NAME
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json` and after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly under your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement when all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check the requirements of the original repo ORIGINAL_REPO_NAME. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate bitsandbytes>0.37.0
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/qnguyen3-nanoLLaVA-1.5-bnb-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model ORIGINAL_REPO_NAME, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|
abhishekkuber/step1_encoder_en_anchor_seq_cf
|
abhishekkuber
| 2025-02-26T15:44:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-26T15:44:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
krgl/Llama-Primus-Merged-gguf
|
krgl
| 2025-02-26T15:43:34Z | 0 | 0 | null |
[
"gguf",
"base_model:trendmicro-ailab/Llama-Primus-Merged",
"base_model:quantized:trendmicro-ailab/Llama-Primus-Merged",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-26T15:23:54Z |
---
license: mit
base_model:
- trendmicro-ailab/Llama-Primus-Merged
---
## This is an 8-bit quantized model of https://huggingface.co/trendmicro-ailab/Llama-Primus-Merged from Trend Micro, in GGUF format
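A minimal sketch for running the quant locally with `llama-cpp-python` (an assumption; the filename pattern is a placeholder, so substitute the actual `.gguf` file from the Files tab):
```python
# Minimal sketch: load the 8-bit GGUF with llama-cpp-python (assumed usage).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="krgl/Llama-Primus-Merged-gguf",
    filename="*Q8_0.gguf",  # placeholder pattern -- check the repo's file listing
    n_ctx=4096,
)
print(llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize common phishing indicators."}]
))
```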
|
iFaz/whisper-SER-base-v7
|
iFaz
| 2025-02-26T15:43:15Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:iFaz/Whisper_Compatible_SER_benchmark",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-02-26T04:24:21Z |
---
library_name: transformers
language:
- en
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
datasets:
- iFaz/Whisper_Compatible_SER_benchmark
metrics:
- wer
model-index:
- name: whisper-SER-base-v7(skip_special_tokens=True during and lr = 1e-05 steps =
12k ,warmup = 500)
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Whisper_Compatible_SER_benchmark + enhanced_facebook_voxpopulik_16k_Whisper_Compatible
type: iFaz/Whisper_Compatible_SER_benchmark
args: 'config: en, split: test'
metrics:
- name: Wer
type: wer
value: 56.95732838589982
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-SER-base-v7(skip_special_tokens=True during and lr = 1e-05 steps = 12k ,warmup = 500)
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Whisper_Compatible_SER_benchmark + enhanced_facebook_voxpopulik_16k_Whisper_Compatible dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0978
- Wer: 56.9573
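A minimal sketch for transcription with the π€ `pipeline` API (standard Whisper usage; the audio path is a placeholder for a local 16 kHz file):
```python
# Minimal sketch: transcribe audio with this fine-tuned Whisper model.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="iFaz/whisper-SER-base-v7")
print(asr("sample.wav"))  # placeholder path to a local audio file
```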
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 12000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:-------:|
| 0.3141 | 0.5510 | 1000 | 0.3218 | 42.8881 |
| 0.1626 | 1.1019 | 2000 | 0.2021 | 58.5652 |
| 0.1553 | 1.6529 | 3000 | 0.1462 | 87.1676 |
| 0.1091 | 2.2039 | 4000 | 0.1199 | 63.8528 |
| 0.1069 | 2.7548 | 5000 | 0.1027 | 63.3271 |
| 0.042 | 3.3058 | 6000 | 0.0958 | 66.8831 |
| 0.0434 | 3.8567 | 7000 | 0.0935 | 77.2418 |
| 0.0254 | 4.4077 | 8000 | 0.0926 | 64.4712 |
| 0.0265 | 4.9587 | 9000 | 0.0939 | 59.9876 |
| 0.0136 | 5.5096 | 10000 | 0.0955 | 58.2870 |
| 0.009 | 6.0606 | 11000 | 0.0985 | 62.9561 |
| 0.0067 | 6.6116 | 12000 | 0.0978 | 56.9573 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.2
- Tokenizers 0.21.0
|
grozmart1/MistralMix-v0.1-0.2
|
grozmart1
| 2025-02-26T15:43:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-02-26T15:35:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TobiGeth/tg_user_450548031_lora_1740583492
|
TobiGeth
| 2025-02-26T15:36:25Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-26T15:36:23Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: USER_450548031_1740583492
---
# Tg_User_450548031_Lora_1740583492
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `USER_450548031_1740583492` to trigger the image generation.
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('TobiGeth/tg_user_450548031_lora_1740583492', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
rowidamontaser/Qwen2.5-3B-Instruct-peft-v1
|
rowidamontaser
| 2025-02-26T15:34:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-26T15:16:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yoongd/bert-base-nsmc
|
yoongd
| 2025-02-26T15:34:25Z | 0 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-02-26T15:34:00Z |
---
library_name: transformers
tags:
- generated_from_keras_callback
model-index:
- name: bert-base-nsmc
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-base-nsmc
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.48.3
- TensorFlow 2.18.0
- Tokenizers 0.21.0
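A minimal sketch for loading the TensorFlow weights (an assumption based on the `tf`/text-classification tags; NSMC is a Korean movie-review sentiment benchmark):
```python
# Minimal sketch: score a review with the TF checkpoint (assumed usage).
import tensorflow as tf
from transformers import AutoTokenizer, TFBertForSequenceClassification

repo = "yoongd/bert-base-nsmc"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFBertForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("μ΄ μν μ λ§ μ¬λ―Έμμ΄μ", return_tensors="tf")  # "This movie was really fun"
print(tf.nn.softmax(model(**inputs).logits, axis=-1).numpy())
```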
|
aniket-meta/Llama3.2-1b-shuttlesupport
|
aniket-meta
| 2025-02-26T15:34:01Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-26T15:33:18Z |
---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** aniket-meta
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
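A minimal sketch for loading the GGUF export with `llama-cpp-python` (an assumption; the filename glob is a placeholder, so pick the actual quant file from the repo):
```python
# Minimal sketch: run the GGUF export locally (assumed usage).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="aniket-meta/Llama3.2-1b-shuttlesupport",
    filename="*.gguf",  # placeholder -- substitute the actual quant filename
)
print(llm.create_chat_completion(
    messages=[{"role": "user", "content": "How do I reschedule a shuttle booking?"}]
))
```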
|
Roy124/Roy
|
Roy124
| 2025-02-26T15:32:42Z | 0 | 0 |
asteroid
|
[
"asteroid",
"ae",
"dataset:open-r1/OpenR1-Math-220k",
"arxiv:1910.09700",
"base_model:deepseek-ai/DeepSeek-V3",
"base_model:finetune:deepseek-ai/DeepSeek-V3",
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-02-26T15:20:14Z |
---
license: bigcode-openrail-m
datasets:
- open-r1/OpenR1-Math-220k
language:
- ae
metrics:
- brier_score
base_model:
- deepseek-ai/DeepSeek-V3
new_version: deepseek-ai/DeepSeek-V3
library_name: asteroid
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Horizon6957/DeepSeek-bio-vlarge-qna-cot-final
|
Horizon6957
| 2025-02-26T15:32:15Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T15:30:37Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
irishprancer/4d743749-bb70-44b7-a93d-10f560ff4b30
|
irishprancer
| 2025-02-26T15:31:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-26T13:09:08Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
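In the absence of documented usage, a minimal sketch with the 🤗 transformers pipeline API, assuming a causal language model (the task is an assumption; the card does not state it):

```python
from transformers import pipeline

# Assumes a causal language model; the intended task is not documented on this card.
generator = pipeline("text-generation", model="irishprancer/4d743749-bb70-44b7-a93d-10f560ff4b30")
print(generator("Hello, how are you?", max_new_tokens=64)[0]["generated_text"])
```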
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Hassan0191/Wallstreetdiggers
|
Hassan0191
| 2025-02-26T15:30:19Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-02-26T15:30:19Z |
---
license: apache-2.0
---
|
afpe/afpe
|
afpe
| 2025-02-26T15:29:28Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2025-02-26T15:16:49Z |
---
license: mit
---
# Models for Principled Positional Encodings for Medical Imaging
Models are saved in folders named after the dataset and the Positional Encoding method.
|
Gyimah3/whisper-small-finetuned
|
Gyimah3
| 2025-02-26T15:29:10Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"whisper",
"generated_from_trainer",
"dataset:common_voice_16_1",
"base_model:openai/whisper-small",
"base_model:adapter:openai/whisper-small",
"license:apache-2.0",
"region:us"
] | null | 2025-02-25T21:53:55Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- common_voice_16_1
library_name: peft
model-index:
- name: whisper-small-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/evelyngyim1111-inlaks/huggingface/runs/bb86w4fx)
# whisper-small-finetuned
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the common_voice_16_1 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.3883
- eval_wer: 88.3854
- eval_runtime: 940.6564
- eval_samples_per_second: 0.702
- eval_steps_per_second: 0.045
- epoch: 2.116
- step: 300
## Model description
More information needed
## Intended uses & limitations
More information needed
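A minimal inference sketch, assuming this repo hosts a standard PEFT LoRA adapter for `openai/whisper-small` (the sample audio source is an assumption):

```python
import torch
from datasets import load_dataset
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = PeftModel.from_pretrained(base, "Gyimah3/whisper-small-finetuned")
model.eval()

# Any 16 kHz mono waveform works; here we use a small public test sample.
sample = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")[0]["audio"]
inputs = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt")
with torch.no_grad():
    ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```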
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 10
- training_steps: 500
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.3
- Pytorch 2.5.1+cu124
- Datasets 2.19.2
- Tokenizers 0.19.1
|
mradermacher/HermesPlay-8B-slerp-GGUF
|
mradermacher
| 2025-02-26T15:29:09Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/Hermes-2-Theta-Llama-3-8B-32k",
"NousResearch/Hermes-3-Llama-3.1-8B",
"en",
"base_model:Sriexe/HermesPlay-8B-slerp",
"base_model:quantized:Sriexe/HermesPlay-8B-slerp",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-26T14:56:18Z |
---
base_model: Sriexe/HermesPlay-8B-slerp
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/Hermes-2-Theta-Llama-3-8B-32k
- NousResearch/Hermes-3-Llama-3.1-8B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Sriexe/HermesPlay-8B-slerp
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
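As a minimal sketch, a quant from the table below can also be loaded with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) (the chosen file and chat usage are assumptions):

```python
from llama_cpp import Llama

# Downloads one quant from this repo via huggingface-hub and runs a short chat turn.
llm = Llama.from_pretrained(
    repo_id="mradermacher/HermesPlay-8B-slerp-GGUF",
    filename="HermesPlay-8B-slerp.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```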
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/HermesPlay-8B-slerp-GGUF/resolve/main/HermesPlay-8B-slerp.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/HermesPlay-8B-slerp-GGUF/resolve/main/HermesPlay-8B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/HermesPlay-8B-slerp-GGUF/resolve/main/HermesPlay-8B-slerp.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/HermesPlay-8B-slerp-GGUF/resolve/main/HermesPlay-8B-slerp.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/HermesPlay-8B-slerp-GGUF/resolve/main/HermesPlay-8B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/HermesPlay-8B-slerp-GGUF/resolve/main/HermesPlay-8B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HermesPlay-8B-slerp-GGUF/resolve/main/HermesPlay-8B-slerp.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HermesPlay-8B-slerp-GGUF/resolve/main/HermesPlay-8B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/HermesPlay-8B-slerp-GGUF/resolve/main/HermesPlay-8B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/HermesPlay-8B-slerp-GGUF/resolve/main/HermesPlay-8B-slerp.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/HermesPlay-8B-slerp-GGUF/resolve/main/HermesPlay-8B-slerp.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/HermesPlay-8B-slerp-GGUF/resolve/main/HermesPlay-8B-slerp.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
wolfgangderigo/joe
|
wolfgangderigo
| 2025-02-26T15:28:43Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-02-26T14:02:24Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
rowankwang/Llama-3.3-70B-Instruct-Reference-ai_consciousness-f7ea1465
|
rowankwang
| 2025-02-26T15:28:31Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-02-26T15:25:04Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0
|
jeahyun99/interviewer
|
jeahyun99
| 2025-02-26T15:27:57Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-02-26T15:27:57Z |
---
license: apache-2.0
---
|
vIDEO-Sophie-Rain-Spiderman-Updates/Sophie.Rain.Spiderman.New.Video.Tutorial.Official
|
vIDEO-Sophie-Rain-Spiderman-Updates
| 2025-02-26T15:25:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-26T15:23:16Z |
|
TobiGeth/tg_user_5600832597_lora_1740582832
|
TobiGeth
| 2025-02-26T15:25:18Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-26T15:25:17Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: USER_5600832597_1740582832
---
# Tg_User_5600832597_Lora_1740582832
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `USER_5600832597_1740582832` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('TobiGeth/tg_user_5600832597_lora_1740582832', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
RichardErkhov/atlimited_-_gemma-2-2b-aio-retriever-gguf
|
RichardErkhov
| 2025-02-26T15:23:29Z | 0 | 0 | null |
[
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-26T14:35:17Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gemma-2-2b-aio-retriever - GGUF
- Model creator: https://huggingface.co/atlimited/
- Original model: https://huggingface.co/atlimited/gemma-2-2b-aio-retriever/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gemma-2-2b-aio-retriever.Q2_K.gguf](https://huggingface.co/RichardErkhov/atlimited_-_gemma-2-2b-aio-retriever-gguf/blob/main/gemma-2-2b-aio-retriever.Q2_K.gguf) | Q2_K | 1.15GB |
| [gemma-2-2b-aio-retriever.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/atlimited_-_gemma-2-2b-aio-retriever-gguf/blob/main/gemma-2-2b-aio-retriever.IQ3_XS.gguf) | IQ3_XS | 1.22GB |
| [gemma-2-2b-aio-retriever.IQ3_S.gguf](https://huggingface.co/RichardErkhov/atlimited_-_gemma-2-2b-aio-retriever-gguf/blob/main/gemma-2-2b-aio-retriever.IQ3_S.gguf) | IQ3_S | 1.27GB |
| [gemma-2-2b-aio-retriever.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/atlimited_-_gemma-2-2b-aio-retriever-gguf/blob/main/gemma-2-2b-aio-retriever.Q3_K_S.gguf) | Q3_K_S | 1.27GB |
| [gemma-2-2b-aio-retriever.IQ3_M.gguf](https://huggingface.co/RichardErkhov/atlimited_-_gemma-2-2b-aio-retriever-gguf/blob/main/gemma-2-2b-aio-retriever.IQ3_M.gguf) | IQ3_M | 1.3GB |
| [gemma-2-2b-aio-retriever.Q3_K.gguf](https://huggingface.co/RichardErkhov/atlimited_-_gemma-2-2b-aio-retriever-gguf/blob/main/gemma-2-2b-aio-retriever.Q3_K.gguf) | Q3_K | 1.36GB |
| [gemma-2-2b-aio-retriever.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/atlimited_-_gemma-2-2b-aio-retriever-gguf/blob/main/gemma-2-2b-aio-retriever.Q3_K_M.gguf) | Q3_K_M | 1.36GB |
| [gemma-2-2b-aio-retriever.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/atlimited_-_gemma-2-2b-aio-retriever-gguf/blob/main/gemma-2-2b-aio-retriever.Q3_K_L.gguf) | Q3_K_L | 1.44GB |
| [gemma-2-2b-aio-retriever.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/atlimited_-_gemma-2-2b-aio-retriever-gguf/blob/main/gemma-2-2b-aio-retriever.IQ4_XS.gguf) | IQ4_XS | 1.47GB |
| [gemma-2-2b-aio-retriever.Q4_0.gguf](https://huggingface.co/RichardErkhov/atlimited_-_gemma-2-2b-aio-retriever-gguf/blob/main/gemma-2-2b-aio-retriever.Q4_0.gguf) | Q4_0 | 1.52GB |
| [gemma-2-2b-aio-retriever.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/atlimited_-_gemma-2-2b-aio-retriever-gguf/blob/main/gemma-2-2b-aio-retriever.IQ4_NL.gguf) | IQ4_NL | 1.53GB |
| [gemma-2-2b-aio-retriever.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/atlimited_-_gemma-2-2b-aio-retriever-gguf/blob/main/gemma-2-2b-aio-retriever.Q4_K_S.gguf) | Q4_K_S | 1.53GB |
| [gemma-2-2b-aio-retriever.Q4_K.gguf](https://huggingface.co/RichardErkhov/atlimited_-_gemma-2-2b-aio-retriever-gguf/blob/main/gemma-2-2b-aio-retriever.Q4_K.gguf) | Q4_K | 1.59GB |
| [gemma-2-2b-aio-retriever.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/atlimited_-_gemma-2-2b-aio-retriever-gguf/blob/main/gemma-2-2b-aio-retriever.Q4_K_M.gguf) | Q4_K_M | 1.59GB |
| [gemma-2-2b-aio-retriever.Q4_1.gguf](https://huggingface.co/RichardErkhov/atlimited_-_gemma-2-2b-aio-retriever-gguf/blob/main/gemma-2-2b-aio-retriever.Q4_1.gguf) | Q4_1 | 1.64GB |
| [gemma-2-2b-aio-retriever.Q5_0.gguf](https://huggingface.co/RichardErkhov/atlimited_-_gemma-2-2b-aio-retriever-gguf/blob/main/gemma-2-2b-aio-retriever.Q5_0.gguf) | Q5_0 | 1.75GB |
| [gemma-2-2b-aio-retriever.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/atlimited_-_gemma-2-2b-aio-retriever-gguf/blob/main/gemma-2-2b-aio-retriever.Q5_K_S.gguf) | Q5_K_S | 1.75GB |
| [gemma-2-2b-aio-retriever.Q5_K.gguf](https://huggingface.co/RichardErkhov/atlimited_-_gemma-2-2b-aio-retriever-gguf/blob/main/gemma-2-2b-aio-retriever.Q5_K.gguf) | Q5_K | 1.79GB |
| [gemma-2-2b-aio-retriever.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/atlimited_-_gemma-2-2b-aio-retriever-gguf/blob/main/gemma-2-2b-aio-retriever.Q5_K_M.gguf) | Q5_K_M | 1.79GB |
| [gemma-2-2b-aio-retriever.Q5_1.gguf](https://huggingface.co/RichardErkhov/atlimited_-_gemma-2-2b-aio-retriever-gguf/blob/main/gemma-2-2b-aio-retriever.Q5_1.gguf) | Q5_1 | 1.87GB |
| [gemma-2-2b-aio-retriever.Q6_K.gguf](https://huggingface.co/RichardErkhov/atlimited_-_gemma-2-2b-aio-retriever-gguf/blob/main/gemma-2-2b-aio-retriever.Q6_K.gguf) | Q6_K | 2.0GB |
| [gemma-2-2b-aio-retriever.Q8_0.gguf](https://huggingface.co/RichardErkhov/atlimited_-_gemma-2-2b-aio-retriever-gguf/blob/main/gemma-2-2b-aio-retriever.Q8_0.gguf) | Q8_0 | 2.59GB |
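A minimal download sketch using huggingface_hub, taking one file name from the table above (the chosen quant is an assumption); the resulting path can then be passed to any GGUF runtime such as llama.cpp:

```python
from huggingface_hub import hf_hub_download

# Fetch a single quant file; pass the returned local path to your GGUF runtime.
path = hf_hub_download(
    repo_id="RichardErkhov/atlimited_-_gemma-2-2b-aio-retriever-gguf",
    filename="gemma-2-2b-aio-retriever.Q4_K_M.gguf",
)
print(path)
```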
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TobiGeth/tg_user_6025318038_lora_1740582739
|
TobiGeth
| 2025-02-26T15:23:20Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-26T15:23:19Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: USER_6025318038_1740582739
---
# Tg_User_6025318038_Lora_1740582739
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `USER_6025318038_1740582739` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('TobiGeth/tg_user_6025318038_lora_1740582739', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
nm-testing/Qwen2.5-VL-7B-Instruct-quantized.w8a8
|
nm-testing
| 2025-02-26T15:22:53Z | 297 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"vllm",
"vision",
"w8a8",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"8-bit",
"compressed-tensors",
"region:us"
] |
image-text-to-text
| 2025-02-07T17:02:21Z |
---
tags:
- vllm
- vision
- w8a8
license: apache-2.0
license_link: >-
https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
language:
- en
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
---
# Qwen2.5-VL-7B-Instruct-quantized-w8a8
## Model Overview
- **Model Architecture:** Qwen/Qwen2.5-VL-7B-Instruct
- **Input:** Vision-Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** INT8
- **Activation quantization:** INT8
- **Release Date:** 2/24/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic
Quantized version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
### Model Optimizations
This model was obtained by quantizing the weights of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) to INT8 data type, ready for inference with vLLM >= 0.5.2.
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm.assets.image import ImageAsset
from vllm import LLM, SamplingParams
# prepare model
llm = LLM(
model="neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8",
trust_remote_code=True,
max_model_len=4096,
max_num_seqs=2,
)
# prepare inputs
question = "What is the content of this image?"
inputs = {
"prompt": f"<|user|>\n<|image_1|>\n{question}<|end|>\n<|assistant|>\n",
"multi_modal_data": {
"image": ImageAsset("cherry_blossom").pil_image.convert("RGB")
},
}
# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
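As a minimal client-side sketch, assuming the model is served locally with `vllm serve neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8` (the image URL is a placeholder):

```python
from openai import OpenAI

# Talk to a locally running vLLM OpenAI-compatible server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
        ],
    }],
    max_tokens=64,
)
print(response.choices[0].message.content)
```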
## Creation
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below as part of a multimodal announcement blog.
<details>
<summary>Model Creation Code</summary>
```python
import base64
from io import BytesIO
import torch
from datasets import load_dataset
from qwen_vl_utils import process_vision_info
from transformers import AutoProcessor
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot
from llmcompressor.transformers.tracing import (
TraceableQwen2_5_VLForConditionalGeneration,
)
# Load model.
model_id = "Qwen/Qwen2.5-VL-7B-Instruct"
model = TraceableQwen2_5_VLForConditionalGeneration.from_pretrained(
model_id,
device_map="auto",
torch_dtype="auto",
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
# Oneshot arguments
DATASET_ID = "lmms-lab/flickr30k"
DATASET_SPLIT = {"calibration": "test[:512]"}
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048
# Load dataset and preprocess.
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42)
dampening_frac=0.01
# Apply chat template and tokenize inputs.
def preprocess_and_tokenize(example):
# preprocess
buffered = BytesIO()
example["image"].save(buffered, format="PNG")
encoded_image = base64.b64encode(buffered.getvalue())
encoded_image_text = encoded_image.decode("utf-8")
base64_qwen = f"data:image;base64,{encoded_image_text}"
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": base64_qwen},
{"type": "text", "text": "What does the image show?"},
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
# tokenize
return processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=False,
max_length=MAX_SEQUENCE_LENGTH,
truncation=True,
)
ds = ds.map(preprocess_and_tokenize, remove_columns=ds["calibration"].column_names)
# Define a oneshot data collator for multimodal inputs.
def data_collator(batch):
assert len(batch) == 1
return {key: torch.tensor(value) for key, value in batch[0].items()}
# Recipe
recipe = [
GPTQModifier(
targets="Linear",
scheme="W8A8",
sequential_targets=["Qwen2_5_VLDecoderLayer"],
ignore=["lm_head", "re:visual.*"],
),
]
SAVE_DIR==f"{model_id.split('/')[1]}-quantized.w8a8"
# Perform oneshot
oneshot(
model=model,
tokenizer=model_id,
dataset=ds,
recipe=recipe,
max_seq_length=MAX_SEQUENCE_LENGTH,
num_calibration_samples=NUM_CALIBRATION_SAMPLES,
trust_remote_code_model=True,
data_collator=data_collator,
output_dir=SAVE_DIR
)
```
</details>
## Evaluation
The model was evaluated using [mistral-evals](https://github.com/neuralmagic/mistral-evals) for vision-related tasks and using [lm_evaluation_harness](https://github.com/neuralmagic/lm-evaluation-harness) for select text-based benchmarks. The evaluations were conducted using the following commands:
<details>
<summary>Evaluation Commands</summary>
### Vision Tasks
- vqav2
- docvqa
- mathvista
- mmmu
- chartqa
```
vllm serve neuralmagic/pixtral-12b-quantized.w8a8 --tensor_parallel_size 1 --max_model_len 25000 --trust_remote_code --max_num_seqs 8 --gpu_memory_utilization 0.9 --dtype float16 --limit_mm_per_prompt image=7
python -m eval.run eval_vllm \
--model_name neuralmagic/pixtral-12b-quantized.w8a8 \
--url http://0.0.0.0:8000 \
--output_dir ~/tmp \
--eval_name <vision_task_name>
```
### Text-based Tasks
#### MMLU
```
lm_eval \
--model vllm \
--model_args pretrained="<model_name>",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=<n>,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
--tasks mmlu \
--num_fewshot 5 \
--batch_size auto \
--output_path output_dir
```
#### MGSM
```
lm_eval \
--model vllm \
--model_args pretrained="<model_name>",dtype=auto,max_model_len=4096,max_gen_toks=2048,max_num_seqs=128,tensor_parallel_size=<n>,gpu_memory_utilization=0.9 \
--tasks mgsm_cot_native \
--num_fewshot 0 \
--batch_size auto \
--output_path output_dir
```
</details>
### Accuracy
<table>
<thead>
<tr>
<th>Category</th>
<th>Metric</th>
<th>Qwen/Qwen2.5-VL-7B-Instruct</th>
<th>Qwen2.5-VL-7B-Instruct-quantized.w8a8</th>
<th>Recovery (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="6"><b>Vision</b></td>
<td>MMMU (val, CoT)<br><i>explicit_prompt_relaxed_correctness</i></td>
<td>52.00</td>
<td>52.33</td>
<td>100.63%</td>
</tr>
<tr>
<td>VQAv2 (val)<br><i>vqa_match</i></td>
<td>75.59</td>
<td>75.46</td>
<td>99.83%</td>
</tr>
<tr>
<td>DocVQA (val)<br><i>anls</i></td>
<td>94.27</td>
<td>94.09</td>
<td>99.81%</td>
</tr>
<tr>
<td>ChartQA (test, CoT)<br><i>anywhere_in_answer_relaxed_correctness</i></td>
<td>86.44</td>
<td>86.16</td>
<td>99.68%</td>
</tr>
<tr>
<td>Mathvista (testmini, CoT)<br><i>explicit_prompt_relaxed_correctness</i></td>
<td>69.47</td>
<td>70.47</td>
<td>101.44%</td>
</tr>
<tr>
<td><b>Average Score</b></td>
<td><b>75.95</b></td>
<td><b>75.90</b></td>
<td><b>99.93%</b></td>
</tr>
<tr>
<td rowspan="3"><b>Text</b></td>
<td>MGSM (CoT)</td>
<td>58.72</td>
<td>59.92</td>
<td>102.04%</td>
</tr>
<tr>
<td>MMLU (5-shot)</td>
<td>71.09</td>
<td>70.57</td>
<td>99.27%</td>
</tr>
</tbody>
</table>
## Inference Performance
This model achieves up to 1.56x speedup in single-stream deployment and 1.5x in multi-stream deployment, depending on hardware and use-case scenario.
The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.7.2, and [GuideLLM](https://github.com/neuralmagic/guidellm).
<details>
<summary>Benchmarking Command</summary>
```
guidellm --model neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8 --target "http://localhost:8000/v1" --data-type emulated --data prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>,images=<num_images>,width=<image_width>,height=<image_height> --max seconds 120 --backend aiohttp_server
```
</details>
### Single-stream performance (measured with vLLM version 0.7.2)
<table border="1" class="dataframe">
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th style="text-align: center;" colspan="2" >Document Visual Question Answering<br>1680W x 2240H<br>64/128</th>
<th style="text-align: center;" colspan="2" >Visual Reasoning <br>640W x 480H<br>128/128</th>
<th style="text-align: center;" colspan="2" >Image Captioning<br>480W x 360H<br>0/128</th>
</tr>
<tr>
<th>Hardware</th>
<th>Model</th>
<th>Average Cost Reduction</th>
<th>Latency (s)</th>
<th>Queries Per Dollar</th>
<th>Latency (s)</th>
<th>Queries Per Dollar</th>
<th>Latency (s)</th>
<th>Queries Per Dollar</th>
</tr>
</thead>
<tbody style="text-align: center">
<tr>
<th rowspan="3" valign="top">A6000x1</th>
<th>Qwen/Qwen2.5-VL-7B-Instruct</th>
<td></td>
<td>4.9</td>
<td>912</td>
<td>3.2</td>
<td>1386</td>
<td>3.1</td>
<td>1431</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8</th>
<td>1.50</td>
<td>3.6</td>
<td>1248</td>
<td>2.1</td>
<td>2163</td>
<td>2.0</td>
<td>2237</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16</th>
<td>2.05</td>
<td>3.3</td>
<td>1351</td>
<td>1.4</td>
<td>3252</td>
<td>1.4</td>
<td>3321</td>
</tr>
<tr>
<th rowspan="3" valign="top">A100x1</th>
<th>Qwen/Qwen2.5-VL-7B-Instruct</th>
<td></td>
<td>2.8</td>
<td>707</td>
<td>1.7</td>
<td>1162</td>
<td>1.7</td>
<td>1198</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8</th>
<td>1.24</td>
<td>2.4</td>
<td>851</td>
<td>1.4</td>
<td>1454</td>
<td>1.3</td>
<td>1512</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16</th>
<td>1.49</td>
<td>2.2</td>
<td>912</td>
<td>1.1</td>
<td>1791</td>
<td>1.0</td>
<td>1950</td>
</tr>
<tr>
<th rowspan="3" valign="top">H100x1</th>
<th>Qwen/Qwen2.5-VL-7B-Instruct</th>
<td></td>
<td>2.0</td>
<td>557</td>
<td>1.2</td>
<td>919</td>
<td>1.2</td>
<td>941</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-FP8-Dynamic</th>
<td>1.28</td>
<td>1.6</td>
<td>698</td>
<td>0.9</td>
<td>1181</td>
<td>0.9</td>
<td>1219</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16</th>
<td>1.28</td>
<td>1.6</td>
<td>686</td>
<td>0.9</td>
<td>1191</td>
<td>0.9</td>
<td>1228</td>
</tr>
</tbody>
</table>
**Use case profiles: Image Size (WxH) / prompt tokens / generation tokens
**QPD: Queries per dollar, based on on-demand cost at [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) (observed on 2/18/2025).
### Multi-stream asynchronous performance (measured with vLLM version 0.7.2)
<table border="1" class="dataframe">
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th style="text-align: center;" colspan="2" >Document Visual Question Answering<br>1680W x 2240H<br>64/128</th>
<th style="text-align: center;" colspan="2" >Visual Reasoning <br>640W x 480H<br>128/128</th>
<th style="text-align: center;" colspan="2" >Image Captioning<br>480W x 360H<br>0/128</th>
</tr>
<tr>
<th>Hardware</th>
<th>Model</th>
<th>Average Cost Reduction</th>
<th>Maximum throughput (QPS)</th>
<th>Queries Per Dollar</th>
<th>Maximum throughput (QPS)</th>
<th>Queries Per Dollar</th>
<th>Maximum throughput (QPS)</th>
<th>Queries Per Dollar</th>
</tr>
</thead>
<tbody style="text-align: center">
<tr>
<th rowspan="3" valign="top">A6000x1</th>
<th>Qwen/Qwen2.5-VL-7B-Instruct</th>
<td></td>
<td>0.4</td>
<td>1837</td>
<td>1.5</td>
<td>6846</td>
<td>1.7</td>
<td>7638</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8</th>
<td>1.41</td>
<td>0.5</td>
<td>2297</td>
<td>2.3</td>
<td>10137</td>
<td>2.5</td>
<td>11472</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16</th>
<td>1.60</td>
<td>0.4</td>
<td>1828</td>
<td>2.7</td>
<td>12254</td>
<td>3.4</td>
<td>15477</td>
</tr>
<tr>
<th rowspan="3" valign="top">A100x1</th>
<th>Qwen/Qwen2.5-VL-7B-Instruct</th>
<td></td>
<td>0.7</td>
<td>1347</td>
<td>2.6</td>
<td>5221</td>
<td>3.0</td>
<td>6122</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8</th>
<td>1.27</td>
<td>0.8</td>
<td>1639</td>
<td>3.4</td>
<td>6851</td>
<td>3.9</td>
<td>7918</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16</th>
<td>1.21</td>
<td>0.7</td>
<td>1314</td>
<td>3.0</td>
<td>5983</td>
<td>4.6</td>
<td>9206</td>
</tr>
<tr>
<th rowspan="3" valign="top">H100x1</th>
<th>Qwen/Qwen2.5-VL-7B-Instruct</th>
<td></td>
<td>0.9</td>
<td>969</td>
<td>3.1</td>
<td>3358</td>
<td>3.3</td>
<td>3615</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-FP8-Dynamic</th>
<td>1.29</td>
<td>1.2</td>
<td>1331</td>
<td>3.8</td>
<td>4109</td>
<td>4.2</td>
<td>4598</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16</th>
<td>1.28</td>
<td>1.2</td>
<td>1298</td>
<td>3.8</td>
<td>4190</td>
<td>4.2</td>
<td>4573</td>
</tr>
</tbody>
</table>
**Use case profiles: Image Size (WxH) / prompt tokens / generation tokens
**QPS: Queries per second.
**QPD: Queries per dollar, based on on-demand cost at [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) (observed on 2/18/2025).
|
PrunaAI/facebook-MobileLLM-125M-bnb-4bit-smashed
|
PrunaAI
| 2025-02-26T15:22:46Z | 0 | 0 | null |
[
"safetensors",
"mobilellm",
"pruna-ai",
"custom_code",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-26T15:22:34Z |
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ORIGINAL_REPO_NAME
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly in your use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check the requirements of the original repo ORIGINAL_REPO_NAME. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate 'bitsandbytes>0.37.0'
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/facebook-MobileLLM-125M-bnb-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model ORIGINAL_REPO_NAME before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|
Tiptopof6/chan
|
Tiptopof6
| 2025-02-26T15:21:30Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-26T14:54:09Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: chan
---
# Chan
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `chan` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Tiptopof6/chan', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Lawnakk/BBA100
|
Lawnakk
| 2025-02-26T15:19:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2",
"base_model:merge:Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2",
"base_model:Qwen/Qwen2.5-Math-7B-Instruct",
"base_model:merge:Qwen/Qwen2.5-Math-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T15:14:25Z |
---
base_model:
- Qwen/Qwen2.5-Math-7B-Instruct
- Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
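As a rough illustration (not mergekit's actual implementation), SLERP interpolates along the great circle between two weight tensors rather than mixing them linearly; the `t` values in the configuration below control this interpolation factor:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Spherical linear interpolation between two flattened weight tensors.
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)
    omega = torch.arccos((a_n * b_n).sum().clamp(-1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    return (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b

# t=0 returns the first tensor, t=1 the second, t=0.5 a spherical midpoint.
merged = slerp(0.5, torch.randn(16), torch.randn(16))
print(merged.shape)
```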
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct)
* [Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2
layer_range:
- 0
- 28
- model: Qwen/Qwen2.5-Math-7B-Instruct
layer_range:
- 0
- 28
merge_method: slerp
base_model: Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2
parameters:
t:
- filter: self_attn
value:
- 0
- 0.5
- 0.3
- 0.7
- 1
- filter: mlp
value:
- 1
- 0.5
- 0.7
- 0.3
- 0
- value: 0.5
dtype: bfloat16
```
|
JacksonBrune/2e711180-acd5-4fa8-bf1a-ddc6699ec146
|
JacksonBrune
| 2025-02-26T15:17:53Z | 0 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:DeepMount00/Llama-3-8b-Ita",
"base_model:adapter:DeepMount00/Llama-3-8b-Ita",
"region:us"
] | null | 2025-02-26T15:17:40Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: DeepMount00/Llama-3-8b-Ita
model-index:
- name: JacksonBrune/2e711180-acd5-4fa8-bf1a-ddc6699ec146
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# JacksonBrune/2e711180-acd5-4fa8-bf1a-ddc6699ec146
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0189
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
nm-testing/Qwen2.5-VL-7B-Instruct-FP8-Dynamic
|
nm-testing
| 2025-02-26T15:17:09Z | 609 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"vllm",
"vision",
"fp8",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] |
image-text-to-text
| 2025-02-06T16:29:20Z |
---
tags:
- vllm
- vision
- fp8
license: apache-2.0
license_link: >-
https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
language:
- en
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
---
# Qwen2.5-VL-7B-Instruct-FP8-Dynamic
## Model Overview
- **Model Architecture:** Qwen2.5-VL-7B-Instruct
- **Input:** Vision-Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** FP8
- **Activation quantization:** FP8
- **Release Date:** 2/24/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic
Quantized version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
### Model Optimizations
This model was obtained by quantizing the weights of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) to FP8 data type, ready for inference with vLLM >= 0.5.2.
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm.assets.image import ImageAsset
from vllm import LLM, SamplingParams
# prepare model
llm = LLM(
model="neuralmagic/Qwen2.5-VL-7B-Instruct-FP8-Dynamic",
trust_remote_code=True,
max_model_len=4096,
max_num_seqs=2,
)
# prepare inputs
question = "What is the content of this image?"
inputs = {
"prompt": f"<|user|>\n<|image_1|>\n{question}<|end|>\n<|assistant|>\n",
"multi_modal_data": {
"image": ImageAsset("cherry_blossom").pil_image.convert("RGB")
},
}
# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
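For instance, after launching a server with `vllm serve neuralmagic/Qwen2.5-VL-7B-Instruct-FP8-Dynamic`, a minimal client sketch looks like the following (the image URL is a placeholder):
```python
# minimal OpenAI-compatible client sketch against a local vLLM server
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="neuralmagic/Qwen2.5-VL-7B-Instruct-FP8-Dynamic",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is the content of this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},  # placeholder URL
        ],
    }],
    max_tokens=64,
)
print(response.choices[0].message.content)
```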
## Creation
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below, as part of a multimodal announcement blog.
<details>
<summary>Model Creation Code</summary>
```python
from transformers import AutoProcessor
from llmcompressor.transformers import oneshot
from llmcompressor.transformers.tracing import (
TraceableQwen2_5_VLForConditionalGeneration,
)
from llmcompressor.modifiers.quantization import QuantizationModifier
# Load model.
model_id = "Qwen/Qwen2.5-VL-7B-Instruct"
model = TraceableQwen2_5_VLForConditionalGeneration.from_pretrained(
model_id, device_map="auto", torch_dtype="auto"
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
# Recipe
recipe = [
QuantizationModifier(
targets="Linear",
scheme="FP8_DYNAMIC",
sequential_targets=["MistralDecoderLayer"],
ignore=["re:.*lm_head", "re:vision_tower.*", "re:multi_modal_projector.*"],
),
]
SAVE_DIR=f"{model_id.split('/')[1]}-FP8-Dynamic"
# Perform oneshot
oneshot(
model=model,
recipe=recipe,
trust_remote_code_model=True,
output_dir=SAVE_DIR
)
```
</details>
## Evaluation
The model was evaluated using [mistral-evals](https://github.com/neuralmagic/mistral-evals) for vision-related tasks and using [lm_evaluation_harness](https://github.com/neuralmagic/lm-evaluation-harness) for select text-based benchmarks. The evaluations were conducted using the following commands:
<details>
<summary>Evaluation Commands</summary>
### Vision Tasks
- vqav2
- docvqa
- mathvista
- mmmu
- chartqa
```
vllm serve neuralmagic/Qwen2.5-VL-7B-Instruct-FP8-Dynamic --tensor_parallel_size 1 --max_model_len 25000 --trust_remote_code --max_num_seqs 8 --gpu_memory_utilization 0.9 --dtype float16 --limit_mm_per_prompt image=7
python -m eval.run eval_vllm \
--model_name neuralmagic/Qwen2.5-VL-7B-Instruct-FP8-Dynamic \
--url http://0.0.0.0:8000 \
--output_dir ~/tmp \
--eval_name <vision_task_name>
```
### Text-based Tasks
#### MMLU
```
lm_eval \
--model vllm \
--model_args pretrained="<model_name>",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=<n>,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
--tasks mmlu \
--num_fewshot 5 \
--batch_size auto \
--output_path output_dir
```
#### MGSM
```
lm_eval \
--model vllm \
--model_args pretrained="<model_name>",dtype=auto,max_model_len=4096,max_gen_toks=2048,max_num_seqs=128,tensor_parallel_size=<n>,gpu_memory_utilization=0.9 \
--tasks mgsm_cot_native \
--num_fewshot 0 \
--batch_size auto \
--output_path output_dir
```
</details>
### Accuracy
<table>
<thead>
<tr>
<th>Category</th>
<th>Metric</th>
<th>Qwen/Qwen2.5-VL-7B-Instruct</th>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-FP8-Dynamic</th>
<th>Recovery (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="6"><b>Vision</b></td>
<td>MMMU (val, CoT)<br><i>explicit_prompt_relaxed_correctness</i></td>
<td>52.00</td>
<td>52.55</td>
<td>101.06%</td>
</tr>
<tr>
<td>VQAv2 (val)<br><i>vqa_match</i></td>
<td>75.59</td>
<td>75.79</td>
<td>100.26%</td>
</tr>
<tr>
<td>DocVQA (val)<br><i>anls</i></td>
<td>94.27</td>
<td>94.27</td>
<td>100.00%</td>
</tr>
<tr>
<td>ChartQA (test, CoT)<br><i>anywhere_in_answer_relaxed_correctness</i></td>
<td>86.44</td>
<td>86.80</td>
<td>100.42%</td>
</tr>
<tr>
<td>Mathvista (testmini, CoT)<br><i>explicit_prompt_relaxed_correctness</i></td>
<td>69.47</td>
<td>71.07</td>
<td>102.31%</td>
</tr>
<tr>
<td><b>Average Score</b></td>
<td><b>75.95</b></td>
<td><b>76.50</b></td>
<td><b>100.73%</b></td>
</tr>
<tr>
<td rowspan="2"><b>Text</b></td>
<td>MGSM (CoT)</td>
<td>58.72</td>
<td>55.34</td>
<td>94.24%</td>
</tr>
<tr>
<td>MMLU (5-shot)</td>
<td>71.09</td>
<td>70.98</td>
<td>99.85%</td>
</tr>
</tbody>
</table>
## Inference Performance
This model achieves up to 1.3x speedup in single-stream deployment and 1.37x in multi-stream deployment, depending on hardware and use-case scenario.
The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.7.2, and [GuideLLM](https://github.com/neuralmagic/guidellm).
<details>
<summary>Benchmarking Command</summary>
```
guidellm --model neuralmagic/Qwen2.5-VL-7B-Instruct-FP8-Dynamic --target "http://localhost:8000/v1" --data-type emulated --data prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>,images=<num_images>,width=<image_width>,height=<image_height> --max seconds 120 --backend aiohttp_server
```
</details>
### Single-stream performance (measured with vLLM version 0.7.2)
<table border="1" class="dataframe">
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th style="text-align: center;" colspan="2" >Document Visual Question Answering<br>1680W x 2240H<br>64/128</th>
<th style="text-align: center;" colspan="2" >Visual Reasoning <br>640W x 480H<br>128/128</th>
<th style="text-align: center;" colspan="2" >Image Captioning<br>480W x 360H<br>0/128</th>
</tr>
<tr>
<th>Hardware</th>
<th>Model</th>
<th>Average Cost Reduction</th>
<th>Latency (s)</th>
<th>Queries Per Dollar</th>
<th>Latency (s)</th>
<th>Queries Per Dollar</th>
<th>Latency (s)</th>
<th>Queries Per Dollar</th>
</tr>
</thead>
<tbody style="text-align: center">
<tr>
<th rowspan="3" valign="top">A6000x1</th>
<th>Qwen/Qwen2.5-VL-7B-Instruct</th>
<td></td>
<td>4.9</td>
<td>912</td>
<td>3.2</td>
<td>1386</td>
<td>3.1</td>
<td>1431</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8</th>
<td>1.50</td>
<td>3.6</td>
<td>1248</td>
<td>2.1</td>
<td>2163</td>
<td>2.0</td>
<td>2237</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16</th>
<td>2.05</td>
<td>3.3</td>
<td>1351</td>
<td>1.4</td>
<td>3252</td>
<td>1.4</td>
<td>3321</td>
</tr>
<tr>
<th rowspan="3" valign="top">A100x1</th>
<th>Qwen/Qwen2.5-VL-7B-Instruct</th>
<td></td>
<td>2.8</td>
<td>707</td>
<td>1.7</td>
<td>1162</td>
<td>1.7</td>
<td>1198</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8</th>
<td>1.24</td>
<td>2.4</td>
<td>851</td>
<td>1.4</td>
<td>1454</td>
<td>1.3</td>
<td>1512</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16</th>
<td>1.49</td>
<td>2.2</td>
<td>912</td>
<td>1.1</td>
<td>1791</td>
<td>1.0</td>
<td>1950</td>
</tr>
<tr>
<th rowspan="3" valign="top">H100x1</th>
<th>Qwen/Qwen2.5-VL-7B-Instruct</th>
<td></td>
<td>2.0</td>
<td>557</td>
<td>1.2</td>
<td>919</td>
<td>1.2</td>
<td>941</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-FP8-Dynamic</th>
<td>1.28</td>
<td>1.6</td>
<td>698</td>
<td>0.9</td>
<td>1181</td>
<td>0.9</td>
<td>1219</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16</th>
<td>1.28</td>
<td>1.6</td>
<td>686</td>
<td>0.9</td>
<td>1191</td>
<td>0.9</td>
<td>1228</td>
</tr>
</tbody>
</table>
**Use case profiles: Image Size (WxH) / prompt tokens / generation tokens
**QPD: Queries per dollar, based on on-demand cost at [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) (observed on 2/18/2025).
### Multi-stream asynchronous performance (measured with vLLM version 0.7.2)
<table border="1" class="dataframe">
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th style="text-align: center;" colspan="2" >Document Visual Question Answering<br>1680W x 2240H<br>64/128</th>
<th style="text-align: center;" colspan="2" >Visual Reasoning <br>640W x 480H<br>128/128</th>
<th style="text-align: center;" colspan="2" >Image Captioning<br>480W x 360H<br>0/128</th>
</tr>
<tr>
<th>Hardware</th>
<th>Model</th>
<th>Average Cost Reduction</th>
<th>Maximum throughput (QPS)</th>
<th>Queries Per Dollar</th>
<th>Maximum throughput (QPS)</th>
<th>Queries Per Dollar</th>
<th>Maximum throughput (QPS)</th>
<th>Queries Per Dollar</th>
</tr>
</thead>
<tbody style="text-align: center">
<tr>
<th rowspan="3" valign="top">A6000x1</th>
<th>Qwen/Qwen2.5-VL-7B-Instruct</th>
<td></td>
<td>0.4</td>
<td>1837</td>
<td>1.5</td>
<td>6846</td>
<td>1.7</td>
<td>7638</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8</th>
<td>1.41</td>
<td>0.5</td>
<td>2297</td>
<td>2.3</td>
<td>10137</td>
<td>2.5</td>
<td>11472</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16</th>
<td>1.60</td>
<td>0.4</td>
<td>1828</td>
<td>2.7</td>
<td>12254</td>
<td>3.4</td>
<td>15477</td>
</tr>
<tr>
<th rowspan="3" valign="top">A100x1</th>
<th>Qwen/Qwen2.5-VL-7B-Instruct</th>
<td></td>
<td>0.7</td>
<td>1347</td>
<td>2.6</td>
<td>5221</td>
<td>3.0</td>
<td>6122</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w8a8</th>
<td>1.27</td>
<td>0.8</td>
<td>1639</td>
<td>3.4</td>
<td>6851</td>
<td>3.9</td>
<td>7918</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16</th>
<td>1.21</td>
<td>0.7</td>
<td>1314</td>
<td>3.0</td>
<td>5983</td>
<td>4.6</td>
<td>9206</td>
</tr>
<tr>
<th rowspan="3" valign="top">H100x1</th>
<th>Qwen/Qwen2.5-VL-7B-Instruct</th>
<td></td>
<td>0.9</td>
<td>969</td>
<td>3.1</td>
<td>3358</td>
<td>3.3</td>
<td>3615</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-FP8-Dynamic</th>
<td>1.29</td>
<td>1.2</td>
<td>1331</td>
<td>3.8</td>
<td>4109</td>
<td>4.2</td>
<td>4598</td>
</tr>
<tr>
<th>neuralmagic/Qwen2.5-VL-7B-Instruct-quantized.w4a16</th>
<td>1.28</td>
<td>1.2</td>
<td>1298</td>
<td>3.8</td>
<td>4190</td>
<td>4.2</td>
<td>4573</td>
</tr>
</tbody>
</table>
**Use case profiles: Image Size (WxH) / prompt tokens / generation tokens
**QPS: Queries per second.
**QPD: Queries per dollar, based on on-demand cost at [Lambda Labs](https://lambdalabs.com/service/gpu-cloud) (observed on 2/18/2025).
|
p2kalita/donut-title-bmw1
|
p2kalita
| 2025-02-26T15:16:43Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-02-26T14:01:23Z |
---
library_name: transformers
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-title-bmw1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-title-bmw1
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
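For reference, the hyperparameters above map onto Hugging Face `TrainingArguments` roughly as sketched below; unlisted settings are assumptions left at their defaults (the listed Adam betas and epsilon are the defaults).
```python
# sketch only: unlisted settings are left at their defaults
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="donut-title-bmw1",
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=8,
    fp16=True,  # Native AMP mixed precision
)
```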
### Training results
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.20.3
|
RichardErkhov/SH198_-_counselor-gguf
|
RichardErkhov
| 2025-02-26T15:16:18Z | 0 | 0 | null |
[
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-26T14:28:44Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
counselor - GGUF
- Model creator: https://huggingface.co/SH198/
- Original model: https://huggingface.co/SH198/counselor/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [counselor.Q2_K.gguf](https://huggingface.co/RichardErkhov/SH198_-_counselor-gguf/blob/main/counselor.Q2_K.gguf) | Q2_K | 1.15GB |
| [counselor.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/SH198_-_counselor-gguf/blob/main/counselor.IQ3_XS.gguf) | IQ3_XS | 1.22GB |
| [counselor.IQ3_S.gguf](https://huggingface.co/RichardErkhov/SH198_-_counselor-gguf/blob/main/counselor.IQ3_S.gguf) | IQ3_S | 1.27GB |
| [counselor.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/SH198_-_counselor-gguf/blob/main/counselor.Q3_K_S.gguf) | Q3_K_S | 1.27GB |
| [counselor.IQ3_M.gguf](https://huggingface.co/RichardErkhov/SH198_-_counselor-gguf/blob/main/counselor.IQ3_M.gguf) | IQ3_M | 1.3GB |
| [counselor.Q3_K.gguf](https://huggingface.co/RichardErkhov/SH198_-_counselor-gguf/blob/main/counselor.Q3_K.gguf) | Q3_K | 1.36GB |
| [counselor.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/SH198_-_counselor-gguf/blob/main/counselor.Q3_K_M.gguf) | Q3_K_M | 1.36GB |
| [counselor.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/SH198_-_counselor-gguf/blob/main/counselor.Q3_K_L.gguf) | Q3_K_L | 1.44GB |
| [counselor.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/SH198_-_counselor-gguf/blob/main/counselor.IQ4_XS.gguf) | IQ4_XS | 1.47GB |
| [counselor.Q4_0.gguf](https://huggingface.co/RichardErkhov/SH198_-_counselor-gguf/blob/main/counselor.Q4_0.gguf) | Q4_0 | 1.52GB |
| [counselor.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/SH198_-_counselor-gguf/blob/main/counselor.IQ4_NL.gguf) | IQ4_NL | 1.53GB |
| [counselor.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/SH198_-_counselor-gguf/blob/main/counselor.Q4_K_S.gguf) | Q4_K_S | 1.53GB |
| [counselor.Q4_K.gguf](https://huggingface.co/RichardErkhov/SH198_-_counselor-gguf/blob/main/counselor.Q4_K.gguf) | Q4_K | 1.59GB |
| [counselor.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/SH198_-_counselor-gguf/blob/main/counselor.Q4_K_M.gguf) | Q4_K_M | 1.59GB |
| [counselor.Q4_1.gguf](https://huggingface.co/RichardErkhov/SH198_-_counselor-gguf/blob/main/counselor.Q4_1.gguf) | Q4_1 | 1.64GB |
| [counselor.Q5_0.gguf](https://huggingface.co/RichardErkhov/SH198_-_counselor-gguf/blob/main/counselor.Q5_0.gguf) | Q5_0 | 1.75GB |
| [counselor.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/SH198_-_counselor-gguf/blob/main/counselor.Q5_K_S.gguf) | Q5_K_S | 1.75GB |
| [counselor.Q5_K.gguf](https://huggingface.co/RichardErkhov/SH198_-_counselor-gguf/blob/main/counselor.Q5_K.gguf) | Q5_K | 1.79GB |
| [counselor.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/SH198_-_counselor-gguf/blob/main/counselor.Q5_K_M.gguf) | Q5_K_M | 1.79GB |
| [counselor.Q5_1.gguf](https://huggingface.co/RichardErkhov/SH198_-_counselor-gguf/blob/main/counselor.Q5_1.gguf) | Q5_1 | 1.87GB |
| [counselor.Q6_K.gguf](https://huggingface.co/RichardErkhov/SH198_-_counselor-gguf/blob/main/counselor.Q6_K.gguf) | Q6_K | 2.0GB |
| [counselor.Q8_0.gguf](https://huggingface.co/RichardErkhov/SH198_-_counselor-gguf/blob/main/counselor.Q8_0.gguf) | Q8_0 | 2.59GB |
Original model description:
---
library_name: transformers
datasets:
- SH198/counselor
language:
- en
base_model:
- google/gemma-2-2b-it
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jason-luo/gemma-2-2B-it-thinking-function_calling-V0
|
Jason-luo
| 2025-02-26T15:15:38Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-02-26T15:11:01Z |
---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: gemma-2-2B-it-thinking-function_calling-V0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-2-2B-it-thinking-function_calling-V0
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Jason-luo/gemma-2-2B-it-thinking-function_calling-V0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
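For reference, a minimal SFT sketch with TRL; the dataset path and hyperparameters below are placeholders, not the exact setup used for this checkpoint:
```python
# illustrative only: the dataset path and hyperparameters are placeholders
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("your-org/your-function-calling-dataset", split="train")  # placeholder dataset
args = SFTConfig(output_dir="gemma-2-2B-it-thinking-function_calling-V0", num_train_epochs=1)
trainer = SFTTrainer(model="google/gemma-2-2b-it", args=args, train_dataset=dataset)
trainer.train()
```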
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.6.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin GallouΓ©dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
PrunaAI/AnatoliiPotapov-T-lite-instruct-0.1-HQQ-4bit-smashed
|
PrunaAI
| 2025-02-26T15:11:37Z | 0 | 0 | null |
[
"llama",
"pruna-ai",
"hqq",
"region:us"
] | null | 2025-02-26T15:04:45Z |
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ORIGINAL_REPO_NAME
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly in your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check the requirements of the original repo ORIGINAL_REPO_NAME. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel

# Try the engine-level loader first; fall back to the generic HQQ loader.
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/AnatoliiPotapov-T-lite-instruct-0.1-HQQ-4bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/AnatoliiPotapov-T-lite-instruct-0.1-HQQ-4bit-smashed")

tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model ORIGINAL_REPO_NAME, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|
mradermacher/Rombo-V3.1-32B-Reasoner-i1-GGUF
|
mradermacher
| 2025-02-26T15:11:10Z | 11 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"grpo",
"en",
"base_model:valoomba/Rombo-V3.1-32B-Reasoner",
"base_model:quantized:valoomba/Rombo-V3.1-32B-Reasoner",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-26T02:57:59Z |
---
base_model: valoomba/Rombo-V3.1-32B-Reasoner
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/valoomba/Rombo-V3.1-32B-Reasoner
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Rombo-V3.1-32B-Reasoner-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
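Once a quant file from the table below has been downloaded, a minimal local-inference sketch with `llama-cpp-python` (the file path is a placeholder):
```python
# sketch only: assumes llama-cpp-python is installed and a quant file was downloaded
from llama_cpp import Llama

llm = Llama(model_path="Rombo-V3.1-32B-Reasoner.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain what an imatrix quant is, in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```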
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Rombo-V3.1-32B-Reasoner-i1-GGUF/resolve/main/Rombo-V3.1-32B-Reasoner.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Rombo-V3.1-32B-Reasoner-i1-GGUF/resolve/main/Rombo-V3.1-32B-Reasoner.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Rombo-V3.1-32B-Reasoner-i1-GGUF/resolve/main/Rombo-V3.1-32B-Reasoner.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Rombo-V3.1-32B-Reasoner-i1-GGUF/resolve/main/Rombo-V3.1-32B-Reasoner.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Rombo-V3.1-32B-Reasoner-i1-GGUF/resolve/main/Rombo-V3.1-32B-Reasoner.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Rombo-V3.1-32B-Reasoner-i1-GGUF/resolve/main/Rombo-V3.1-32B-Reasoner.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/Rombo-V3.1-32B-Reasoner-i1-GGUF/resolve/main/Rombo-V3.1-32B-Reasoner.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Rombo-V3.1-32B-Reasoner-i1-GGUF/resolve/main/Rombo-V3.1-32B-Reasoner.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Rombo-V3.1-32B-Reasoner-i1-GGUF/resolve/main/Rombo-V3.1-32B-Reasoner.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Rombo-V3.1-32B-Reasoner-i1-GGUF/resolve/main/Rombo-V3.1-32B-Reasoner.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/Rombo-V3.1-32B-Reasoner-i1-GGUF/resolve/main/Rombo-V3.1-32B-Reasoner.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Rombo-V3.1-32B-Reasoner-i1-GGUF/resolve/main/Rombo-V3.1-32B-Reasoner.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Rombo-V3.1-32B-Reasoner-i1-GGUF/resolve/main/Rombo-V3.1-32B-Reasoner.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/Rombo-V3.1-32B-Reasoner-i1-GGUF/resolve/main/Rombo-V3.1-32B-Reasoner.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Rombo-V3.1-32B-Reasoner-i1-GGUF/resolve/main/Rombo-V3.1-32B-Reasoner.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Rombo-V3.1-32B-Reasoner-i1-GGUF/resolve/main/Rombo-V3.1-32B-Reasoner.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/Rombo-V3.1-32B-Reasoner-i1-GGUF/resolve/main/Rombo-V3.1-32B-Reasoner.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Rombo-V3.1-32B-Reasoner-i1-GGUF/resolve/main/Rombo-V3.1-32B-Reasoner.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Rombo-V3.1-32B-Reasoner-i1-GGUF/resolve/main/Rombo-V3.1-32B-Reasoner.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Rombo-V3.1-32B-Reasoner-i1-GGUF/resolve/main/Rombo-V3.1-32B-Reasoner.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/Rombo-V3.1-32B-Reasoner-i1-GGUF/resolve/main/Rombo-V3.1-32B-Reasoner.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Rombo-V3.1-32B-Reasoner-i1-GGUF/resolve/main/Rombo-V3.1-32B-Reasoner.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/Rombo-V3.1-32B-Reasoner-i1-GGUF/resolve/main/Rombo-V3.1-32B-Reasoner.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
kavish218/bt_des_complete_1b_v1
|
kavish218
| 2025-02-26T15:10:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T15:09:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ADFLYSD/NAVEEEE
|
ADFLYSD
| 2025-02-26T15:09:38Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-02-26T15:09:38Z |
---
license: apache-2.0
---
|
andro-flock/Liberty-LibertyMain-Inpainting
|
andro-flock
| 2025-02-26T15:08:43Z | 2 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"Safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"diffusers:StableDiffusionInpaintPipeline",
"region:us"
] |
text-to-image
| 2025-02-25T14:00:28Z |
---
library_name: diffusers
license: creativeml-openrail-m
pipeline_tag: text-to-image
tags:
- Safetensors
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
---
# Liberty_LibertyMain Inpainting

### Description:
> This is the version you should be using **for inpainting**:
> - *VAE is included in the model.*
> - Remember the filename **must end in *inpainting***.
> - *CLIP is fixed.*
### Civitai Page: https://civitai.com/models/6937
You can use this with the [🧨 Diffusers library](https://github.com/huggingface/diffusers)
### Diffusers
```py
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image
import torch

model_id = "andro-flock/Liberty-LibertyMain-Inpainting"
pipe = StableDiffusionInpaintPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Inpainting needs a source image plus a mask (white = regions to repaint);
# "input.png" and "mask.png" are placeholder paths.
init_image = load_image("input.png")
mask_image = load_image("mask.png")

prompt = "masterpiece, best quality, 1girl, (colorful),(delicate eyes and face), volumatic light, ray tracing, bust shot ,extremely detailed CG unity 8k wallpaper,solo,smile"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
image.save("result.png")
```
|
daniel40/925d4a6d-bd79-4ba4-853f-77057c934361
|
daniel40
| 2025-02-26T15:07:21Z | 0 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:DeepMount00/Llama-3-8b-Ita",
"base_model:adapter:DeepMount00/Llama-3-8b-Ita",
"region:us"
] | null | 2025-02-26T15:07:03Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: DeepMount00/Llama-3-8b-Ita
model-index:
- name: daniel40/925d4a6d-bd79-4ba4-853f-77057c934361
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# daniel40/925d4a6d-bd79-4ba4-853f-77057c934361
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0185
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
cyzcas/ppo-Huggy
|
cyzcas
| 2025-02-26T15:04:24Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2025-02-26T15:04:19Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: cyzcas/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ButterChicken98/pv_sd2-lora_rank_64_bact_spot
|
ButterChicken98
| 2025-02-26T15:02:18Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stabilityai/stable-diffusion-2",
"base_model:adapter:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-02-26T12:40:06Z |
---
base_model: stabilityai/stable-diffusion-2
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - ButterChicken98/pv_sd2-lora_rank_64_bact_spot
These are LoRA adaption weights for stabilityai/stable-diffusion-2. The weights were fine-tuned on the ButterChicken98/controlnet_canny_segmented_tomato_Tomato_Bacterial_spot dataset. You can find some example images in the following.




## Intended uses & limitations
#### How to use
```python
# a minimal sketch, assuming standard diffusers LoRA loading applies;
# the prompt and output filename are illustrative
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2", torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights("ButterChicken98/pv_sd2-lora_rank_64_bact_spot")
image = pipe("a tomato leaf with bacterial spot symptoms").images[0]
image.save("bacterial_spot.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
TranVanMinh/dummy-model
|
TranVanMinh
| 2025-02-26T15:00:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-02-26T14:33:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
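Given the repository's `camembert` and `fill-mask` tags, a minimal usage sketch might be the following (output quality is unverified; the sentence is illustrative):
```python
# sketch based only on the repo's fill-mask tag
from transformers import pipeline

unmasker = pipeline("fill-mask", model="TranVanMinh/dummy-model")
print(unmasker("Paris est la <mask> de la France."))
```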
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF
|
mradermacher
| 2025-02-26T15:00:07Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Nexesenex/Llama_3.1_8b_DobHerLeashed_R1_v1.0",
"base_model:quantized:Nexesenex/Llama_3.1_8b_DobHerLeashed_R1_v1.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-26T07:21:18Z |
---
base_model: Nexesenex/Llama_3.1_8b_DobHerLeashed_R1_v1.0
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Nexesenex/Llama_3.1_8b_DobHerLeashed_R1_v1.0
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
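To fetch a single quant file programmatically, a minimal sketch with `huggingface_hub` (using the i1-Q4_K_M file from the table below):
```python
# sketch only: any filename from the table below works
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF",
    filename="Llama_3.1_8b_DobHerLeash_R1.i1-Q4_K_M.gguf",
)
print(path)
```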
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeash_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeash_R1.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
TobiGeth/tg_user_443574186_lora_1740581360
|
TobiGeth
| 2025-02-26T14:59:38Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-26T14:59:37Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: USER_443574186_1740581360
---
# Tg_User_443574186_Lora_1740581360
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `USER_443574186_1740581360` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('TobiGeth/tg_user_443574186_lora_1740581360', weight_name='lora.safetensors')
# Include the trigger word in the prompt so the LoRA activates.
image = pipeline('USER_443574186_1740581360, your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
JayHyeon/Qwen_0.5-rDPO_3e-6-1ep_0vpo_const_0.1
|
JayHyeon
| 2025-02-26T14:57:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:trl-lib/ultrafeedback_binarized",
"arxiv:2305.18290",
"base_model:JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep",
"base_model:finetune:JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T12:56:06Z |
---
base_model: JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: Qwen_0.5-rDPO_3e-6-1ep_0vpo_const_0.1
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen_0.5-rDPO_3e-6-1ep_0vpo_const_0.1
This model is a fine-tuned version of [JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep](https://huggingface.co/JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/Qwen_0.5-rDPO_3e-6-1ep_0vpo_const_0.1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/wdcfvil9)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
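As a rough illustration (this is a minimal sketch, not the TRL implementation), the DPO objective maximizes the margin between the policy's implicit rewards for chosen and rejected responses, measured against a frozen reference model. The helper below assumes per-sequence log-probabilities have already been computed:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Implicit rewards: how far the policy moved each response's
    # log-probability away from the frozen reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between chosen and rejected rewards.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```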
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
bootylordoftheroundtable/Q25-1.5B-VeoLu-OwO-fied-Q8_0-GGUF
|
bootylordoftheroundtable
| 2025-02-26T14:57:55Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:SaisExperiments/Q25-1.5B-VeoLu-OwO-fied",
"base_model:quantized:SaisExperiments/Q25-1.5B-VeoLu-OwO-fied",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-26T14:57:43Z |
---
license: apache-2.0
base_model: SaisExperiments/Q25-1.5B-VeoLu-OwO-fied
tags:
- llama-cpp
- gguf-my-repo
---
# bootylordoftheroundtable/Q25-1.5B-VeoLu-OwO-fied-Q8_0-GGUF
This model was converted to GGUF format from [`SaisExperiments/Q25-1.5B-VeoLu-OwO-fied`](https://huggingface.co/SaisExperiments/Q25-1.5B-VeoLu-OwO-fied) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SaisExperiments/Q25-1.5B-VeoLu-OwO-fied) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo bootylordoftheroundtable/Q25-1.5B-VeoLu-OwO-fied-Q8_0-GGUF --hf-file q25-1.5b-veolu-owo-fied-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo bootylordoftheroundtable/Q25-1.5B-VeoLu-OwO-fied-Q8_0-GGUF --hf-file q25-1.5b-veolu-owo-fied-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo bootylordoftheroundtable/Q25-1.5B-VeoLu-OwO-fied-Q8_0-GGUF --hf-file q25-1.5b-veolu-owo-fied-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo bootylordoftheroundtable/Q25-1.5B-VeoLu-OwO-fied-Q8_0-GGUF --hf-file q25-1.5b-veolu-owo-fied-q8_0.gguf -c 2048
```
|
TobiGeth/tg_user_712887841_lora_1740581177
|
TobiGeth
| 2025-02-26T14:57:49Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-26T14:57:48Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: USER_712887841_1740581177
---
# Tg_User_712887841_Lora_1740581177
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `USER_712887841_1740581177` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('TobiGeth/tg_user_712887841_lora_1740581177', weight_name='lora.safetensors')
# Include the trigger word in the prompt so the LoRA activates.
image = pipeline('USER_712887841_1740581177, your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
RaviKanur/Phi-3.5-mini-4k-instruct-text2sql
|
RaviKanur
| 2025-02-26T14:56:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-10-30T14:31:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nofing/qwen2.5-Instruct-1.5B-ere
|
Nofing
| 2025-02-26T14:54:54Z | 26 | 0 | null |
[
"safetensors",
"qwen2",
"ERE",
"text2text-generation",
"en",
"dataset:Nofing/maven-ere-llm",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:unknown",
"region:us"
] |
text2text-generation
| 2025-02-25T14:52:32Z |
---
license: unknown
datasets:
- Nofing/maven-ere-llm
language:
- en
metrics:
- accuracy
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
pipeline_tag: text2text-generation
tags:
- ERE
---
|
TobiGeth/tg_user_634033363_lora_1740580924
|
TobiGeth
| 2025-02-26T14:53:54Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-26T14:53:53Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: USER_634033363_1740580924
---
# Tg_User_634033363_Lora_1740580924
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `USER_634033363_1740580924` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('TobiGeth/tg_user_634033363_lora_1740580924', weight_name='lora.safetensors')
# Include the trigger word in the prompt so the LoRA activates.
image = pipeline('USER_634033363_1740580924, your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
SaisExperiments/Q25-1.5B-VeoLu-OwO-fied
|
SaisExperiments
| 2025-02-26T14:53:54Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null | 2025-02-26T14:50:55Z |
---
license: apache-2.0
---
|
Elcaida/test2
|
Elcaida
| 2025-02-26T14:52:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T14:51:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/JAJUKA-WEWILLNEVERFORGETYOU-3B-GGUF
|
mradermacher
| 2025-02-26T14:52:25Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mergekit-community/JAJUKA-WEWILLNEVERFORGETYOU-3B",
"base_model:quantized:mergekit-community/JAJUKA-WEWILLNEVERFORGETYOU-3B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-26T09:59:47Z |
---
base_model: mergekit-community/JAJUKA-WEWILLNEVERFORGETYOU-3B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mergekit-community/JAJUKA-WEWILLNEVERFORGETYOU-3B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/JAJUKA-WEWILLNEVERFORGETYOU-3B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
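For multi-part quants, the parts are byte-concatenated in order before use. A minimal sketch (the part filenames below are hypothetical; use the actual names listed in the repository):

```python
# Rejoin split GGUF parts by simple byte-concatenation.
import shutil

parts = ["model.gguf.part1of2", "model.gguf.part2of2"]  # assumed naming
with open("model.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```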
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/JAJUKA-WEWILLNEVERFORGETYOU-3B-GGUF/resolve/main/JAJUKA-WEWILLNEVERFORGETYOU-3B.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/JAJUKA-WEWILLNEVERFORGETYOU-3B-GGUF/resolve/main/JAJUKA-WEWILLNEVERFORGETYOU-3B.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/JAJUKA-WEWILLNEVERFORGETYOU-3B-GGUF/resolve/main/JAJUKA-WEWILLNEVERFORGETYOU-3B.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/JAJUKA-WEWILLNEVERFORGETYOU-3B-GGUF/resolve/main/JAJUKA-WEWILLNEVERFORGETYOU-3B.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/JAJUKA-WEWILLNEVERFORGETYOU-3B-GGUF/resolve/main/JAJUKA-WEWILLNEVERFORGETYOU-3B.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/JAJUKA-WEWILLNEVERFORGETYOU-3B-GGUF/resolve/main/JAJUKA-WEWILLNEVERFORGETYOU-3B.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/JAJUKA-WEWILLNEVERFORGETYOU-3B-GGUF/resolve/main/JAJUKA-WEWILLNEVERFORGETYOU-3B.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/JAJUKA-WEWILLNEVERFORGETYOU-3B-GGUF/resolve/main/JAJUKA-WEWILLNEVERFORGETYOU-3B.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/JAJUKA-WEWILLNEVERFORGETYOU-3B-GGUF/resolve/main/JAJUKA-WEWILLNEVERFORGETYOU-3B.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/JAJUKA-WEWILLNEVERFORGETYOU-3B-GGUF/resolve/main/JAJUKA-WEWILLNEVERFORGETYOU-3B.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/JAJUKA-WEWILLNEVERFORGETYOU-3B-GGUF/resolve/main/JAJUKA-WEWILLNEVERFORGETYOU-3B.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/JAJUKA-WEWILLNEVERFORGETYOU-3B-GGUF/resolve/main/JAJUKA-WEWILLNEVERFORGETYOU-3B.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|