modelId (string) | author (string) | last_modified (timestamp[us, UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
santwilliam/DeepSeek-R1-8B-MBTI-switmer-MBTI-Sentiment | santwilliam | 2025-03-06T11:48:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-06T11:48:05Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** santwilliam
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Grogros/dmWM-llama-3.2-1B-Instruct-KGW-d4-allData | Grogros | 2025-03-06T11:47:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-06T08:12:26Z | ---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B-Instruct
tags:
- generated_from_trainer
model-index:
- name: dmWM-llama-3.2-1B-Instruct-KGW-d4-allData
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dmWM-llama-3.2-1B-Instruct-KGW-d4-allData
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adafactor (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 2500
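For reference, here is a minimal sketch of how these settings map onto `transformers.TrainingArguments`; the `output_dir` value is a placeholder, and all other values are taken from the list above:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dmWM-llama-3.2-1B-Instruct-KGW-d4-allData",  # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,  # 4 * 16 = 64 total train batch size
    optim="adafactor",               # no additional optimizer arguments
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    max_steps=2500,
)
```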
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1.post303
- Datasets 3.2.0
- Tokenizers 0.20.3
|
dd-0pal/mistral_instruct_v4_KQL-Q8_0-GGUF | dd-0pal | 2025-03-06T11:45:18Z | 0 | 0 | peft | [
"peft",
"gguf",
"trl",
"sft",
"generated_from_trainer",
"llama-cpp",
"gguf-my-lora",
"dataset:generator",
"base_model:aisha44/mistral_instruct_v4_KQL",
"base_model:adapter:aisha44/mistral_instruct_v4_KQL",
"license:apache-2.0",
"region:us"
] | null | 2025-03-06T11:45:16Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
- llama-cpp
- gguf-my-lora
base_model: aisha44/mistral_instruct_v4_KQL
datasets:
- generator
model-index:
- name: mistral_instruct_v4_KQL
results: []
---
# dd-0pal/mistral_instruct_v4_KQL-Q8_0-GGUF
This LoRA adapter was converted to GGUF format from [`aisha44/mistral_instruct_v4_KQL`](https://huggingface.co/aisha44/mistral_instruct_v4_KQL) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/aisha44/mistral_instruct_v4_KQL) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora mistral_instruct_v4_KQL-q8_0.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora mistral_instruct_v4_KQL-q8_0.gguf (...other args)
```
To learn more about LoRA usage with the llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|
EDashMan/emmanuel | EDashMan | 2025-03-06T11:42:42Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-03-06T11:25:44Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Emmanuel
---
# Emmanuel
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Emmanuel` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('EDashMan/emmanuel', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
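Note that the prompt should include the trigger word `Emmanuel` for the LoRA to take effect; for example, with a hypothetical prompt:
```py
image = pipeline('a portrait photo of Emmanuel in a garden').images[0]
```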
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
|
ethz-spylab/Llama-3.1-70B-Instruct_do_biology_again_5e-5 | ethz-spylab | 2025-03-06T11:42:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-05T15:46:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
youralien/roberta-cbl-Empathy-goodareas-eval_FeedbackESConv5pp_CARE10pp-sweeps-current | youralien | 2025-03-06T11:41:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-06T10:31:34Z | ---
library_name: transformers
license: mit
base_model: FacebookAI/roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: roberta-cbl-Empathy-goodareas-eval_FeedbackESConv5pp_CARE10pp-sweeps-current
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-cbl-Empathy-goodareas-eval_FeedbackESConv5pp_CARE10pp-sweeps-current
This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4714
- Accuracy: 0.8023
- Precision: 0.6350
- Recall: 0.8735
- F1: 0.7354
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.669740264596789e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5351 | 1.0 | 243 | 0.4195 | 0.8151 | 0.7095 | 0.6980 | 0.7037 |
| 0.4572 | 2.0 | 486 | 0.4694 | 0.7189 | 0.5294 | 0.9551 | 0.6812 |
| 0.3823 | 3.0 | 729 | 0.4196 | 0.8010 | 0.6398 | 0.8408 | 0.7266 |
| 0.301 | 4.0 | 972 | 0.4714 | 0.8023 | 0.6350 | 0.8735 | 0.7354 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 2.21.0
- Tokenizers 0.21.0
|
TreLiam/LLMJudge-Qwen2.5-14B-Instruct-1M | TreLiam | 2025-03-06T11:40:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-05T20:44:58Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MCG-NJU/MoG | MCG-NJU | 2025-03-06T11:40:06Z | 5 | 1 | MoG | [
"MoG",
"arxiv:2501.03699",
"license:apache-2.0",
"region:us"
] | null | 2025-02-28T06:21:11Z | ---
license: apache-2.0
library_name: MoG
---
# MoG: Motion-Aware Generative Frame Interpolation
<div style="text-align: center;">
<img src="examples/1.gif" style="width: 32%; display: inline-block;">
<img src="examples/2.gif" style="width: 32%; display: inline-block;">
<img src="examples/3.gif" style="width: 32%; display: inline-block;">
</div>
<div style="text-align: center;">
<img src="examples/4.gif" style="width: 32%; display: inline-block;">
<img src="examples/5.gif" style="width: 32%; display: inline-block;">
<img src="examples/6.gif" style="width: 32%; display: inline-block;">
</div>
MoG is a generative video frame interpolation (VFI) model, designed to synthesize intermediate frames between two input frames.
MoG is the first VFI framework to bridge the gap between flow-based stability and generative flexibility. We introduce a dual-level guidance injection design to constrain generated motion using motion trajectories derived from optical flow. To enhance the generative model's ability to dynamically correct flow errors, we implement encoder-only guidance injection and selective parameter fine-tuning. As a result, MoG achieves significant improvements over existing open-source generative VFI methods, delivering superior performance in both real-world and animated scenarios.
Source code is available at [https://github.com/MCG-NJU/MoG-VFI](https://github.com/MCG-NJU/MoG-VFI).
## Network Architecture

## Model Description
- **Developed by:** Nanjing University, Tencent PCG
- **Model type:** Generative video frame interpolation model; takes two still video frames as input.
- **arXiv paper:** [https://arxiv.org/pdf/2501.03699](https://arxiv.org/pdf/2501.03699)
- **Project page:** [https://mcg-nju.github.io/MoG_Web/](https://mcg-nju.github.io/MoG_Web/)
- **Repository:** [https://github.com/MCG-NJU/MoG-VFI](https://github.com/MCG-NJU/MoG-VFI)
- **License:** Apache 2.0 license.
## Usage
We provide two model checkpoints: `real.ckpt` for real-world scenes and `ani.ckpt` for animation scenes. For detailed instructions on loading the checkpoints and performing inference, please refer to our [official repository](https://github.com/MCG-NJU/MoG-VFI).
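As a minimal sketch (assuming the checkpoint files sit at the root of this repository), they can be fetched with `huggingface_hub` before following the repository's inference instructions:
```python
from huggingface_hub import hf_hub_download

# Assumption: checkpoints are stored at the repo root under these names.
real_ckpt = hf_hub_download(repo_id="MCG-NJU/MoG", filename="real.ckpt")  # real-world scenes
ani_ckpt = hf_hub_download(repo_id="MCG-NJU/MoG", filename="ani.ckpt")    # animation scenes
```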
## Citation
If you find our code useful or our work relevant, please consider citing:
```
@article{zhang2025motion,
title={Motion-Aware Generative Frame Interpolation},
author={Zhang, Guozhen and Zhu, Yuhan and Cui, Yutao and Zhao, Xiaotong and Ma, Kai and Wang, Limin},
journal={arXiv preprint arXiv:2501.03699},
year={2025}
}
``` |
Ashed00/toxic_text_gen | Ashed00 | 2025-03-06T11:39:01Z | 10 | 0 | null | [
"safetensors",
"gpt2",
"en",
"dataset:OxAISH-AL-LLM/wiki_toxic",
"dataset:textdetox/multilingual_toxic_spans",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"region:us"
] | null | 2025-03-05T17:59:04Z | ---
license: mit
datasets:
- OxAISH-AL-LLM/wiki_toxic
- textdetox/multilingual_toxic_spans
language:
- en
base_model:
- openai-community/gpt2
---
# Model Card for Toxic Text GEN
This model is a Decision Transformer for text generation with controlled toxicity (0–1).
## Model Details
### Model Description
Built as a Decision Transformer, it generates sentences conditioned on a toxicity control value (defined as the reward-to-go, or RTG).
Current text generation is not very coherent due to the limited variety of the training data and low compute.
- **Developed by:** Ashed00
- **Finetuned from model:** GPT-2
### Model Sources [optional]
- **Repository:** [https://github.com/Ashu-00/NLP-Implementations/tree/main/Decision_Transformer]
- **Demo:** Soon
## Uses
A fun little experiment.
## Bias, Risks, and Limitations
This model inherits the biases of its training data; I take no responsibility for its generations.
Most generated text is incoherent due to the limited variety of the training data.
## How to Get Started with the Model
```python
import torch
import torch.nn.functional as F

# Assumes `model`, `tokenizer`, and `prompt` are defined elsewhere (see the repository).
device = "cuda" if torch.cuda.is_available() else "cpu"

def generate_conditioned_text2(model, tokenizer, prompt, target_rtg, max_length=50, temperature=1.0, top_k=50):
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].to(device)
    attention_mask = inputs["attention_mask"].to(device)
    # Create an RTG tensor with the target value for each token in the prompt
    rtg = torch.tensor([[target_rtg] * input_ids.shape[1]], dtype=torch.float).to(device)
    seq_length = input_ids.shape[1]
    for _ in range(max_length):
        with torch.no_grad():
            # Slice rtg to match the current sequence length
            rtg_current = rtg[:, :seq_length]
            outputs = model(
                input_ids=input_ids,
                attention_mask=attention_mask,
                rtg=rtg_current,
                return_dict=True,
            )
        # Get next-token logits and apply temperature scaling
        next_token_logits = outputs["logits"][:, -1, :] / temperature
        # Apply top-k filtering
        top_k_logits, top_k_indices = torch.topk(next_token_logits, top_k)
        probabilities = F.softmax(top_k_logits, dim=-1)
        next_token = top_k_indices[0, torch.multinomial(probabilities, num_samples=1)]
        # Append the predicted token to input_ids and update the attention mask
        input_ids = torch.cat([input_ids, next_token], dim=-1)
        attention_mask = torch.cat([attention_mask, torch.ones_like(next_token)], dim=-1)
        # Append the target reward for the new token
        new_rtg = torch.tensor([[target_rtg]], dtype=torch.float).to(device)
        rtg = torch.cat([rtg, new_rtg], dim=1)
        # Stop if the EOS token is generated
        if next_token.item() == tokenizer.eos_token_id:
            break
        seq_length += 1
    return tokenizer.decode(input_ids[0], skip_special_tokens=True)

# Generate at three target toxicity levels (reward-to-go)
high_rtg_text = generate_conditioned_text2(model, tokenizer, prompt, target_rtg=1.0)
low_rtg_text = generate_conditioned_text2(model, tokenizer, prompt, target_rtg=0.0)
mid_rtg_text = generate_conditioned_text2(model, tokenizer, prompt, target_rtg=0.5)

print("RTG = 1.0 text:", high_rtg_text)
print("RTG = 0.0 text:", low_rtg_text)
print("RTG = 0.5 text:", mid_rtg_text)
```
## Training Details
Refer to the GitHub repository for the training datasets and procedure. |
mounikasmlk/facebook-opt-350m-thinking-function-calling-v | mounikasmlk | 2025-03-06T11:36:23Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:facebook/opt-350m",
"base_model:finetune:facebook/opt-350m",
"endpoints_compatible",
"region:us"
] | null | 2025-03-06T11:36:08Z | ---
base_model: facebook/opt-350m
library_name: transformers
model_name: facebook-opt-350m-thinking-function-calling-v
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for facebook-opt-350m-thinking-function-calling-v
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mounikasmlk/facebook-opt-350m-thinking-function-calling-v", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
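As a hedged sketch of what such an SFT run can look like with TRL (the dataset name and config values below are hypothetical; this card does not document them):
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical dataset: the actual training data is not documented in this card.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="facebook/opt-350m",  # base model named above
    train_dataset=dataset,
    args=SFTConfig(output_dir="facebook-opt-350m-thinking-function-calling-v"),
)
trainer.train()
```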
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
TFOCUS/bruno_tester_20 | TFOCUS | 2025-03-06T11:36:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-06T11:33:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B-GGUF | mradermacher | 2025-03-06T11:35:46Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"reasoning",
"thinking",
"cognitivecomputations",
"r1",
"cot",
"deepseek",
"Qwen2.5",
"Hermes",
"DeepHermes",
"128k context",
"fine tune",
"merge",
"en",
"base_model:DavidAU/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B",
"base_model:quantized:DavidAU/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-06T10:59:16Z | ---
base_model: DavidAU/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- reasoning
- thinking
- cognitivecomputations
- r1
- cot
- deepseek
- Qwen2.5
- Hermes
- DeepHermes
- 128k context
- fine tune
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DavidAU/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
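As a minimal sketch, a single quant can also be fetched programmatically with `huggingface_hub`; the filename below is taken from the Q4_K_M row in the table that follows, and any listed file works the same way:
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B-GGUF",
    filename="Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B.Q4_K_M.gguf",
)
print(path)  # local path to the downloaded GGUF file
```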
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B-GGUF/resolve/main/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B.Q2_K.gguf) | Q2_K | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B-GGUF/resolve/main/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B-GGUF/resolve/main/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B-GGUF/resolve/main/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B-GGUF/resolve/main/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B-GGUF/resolve/main/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B-GGUF/resolve/main/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B-GGUF/resolve/main/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B-GGUF/resolve/main/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B-GGUF/resolve/main/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B.Q6_K.gguf) | Q6_K | 3.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B-GGUF/resolve/main/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B.Q8_0.gguf) | Q8_0 | 4.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B-GGUF/resolve/main/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B.f16.gguf) | f16 | 8.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
yusuf-aslan/turkish-text-quality-analyzer | yusuf-aslan | 2025-03-06T11:35:38Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dbmdz/bert-base-turkish-128k-uncased",
"base_model:finetune:dbmdz/bert-base-turkish-128k-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-06T10:17:27Z | ---
library_name: transformers
license: mit
base_model: dbmdz/bert-base-turkish-128k-uncased
tags:
- generated_from_trainer
model-index:
- name: turkish-text-quality-analyzer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# turkish-text-quality-analyzer
This model is a fine-tuned version of [dbmdz/bert-base-turkish-128k-uncased](https://huggingface.co/dbmdz/bert-base-turkish-128k-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2913
- Mse: 0.2913
- Rmse: 0.5398
- Mae: 0.4168
- R2: 0.6874
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Rmse | Mae | R2 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:------:|
| 0.3166 | 1.0 | 1490 | 0.3102 | 0.3102 | 0.5570 | 0.4322 | 0.6672 |
| 0.2561 | 2.0 | 2980 | 0.2913 | 0.2913 | 0.5398 | 0.4168 | 0.6874 |
| 0.1711 | 3.0 | 4470 | 0.3395 | 0.3395 | 0.5827 | 0.4464 | 0.6357 |
| 0.0981 | 4.0 | 5960 | 0.3376 | 0.3376 | 0.5811 | 0.4433 | 0.6377 |
| 0.063 | 5.0 | 7450 | 0.3275 | 0.3275 | 0.5723 | 0.4321 | 0.6486 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
pabloma09/layoutlm-FUNSDxSynthetic-5fold | pabloma09 | 2025-03-06T11:33:25Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"layoutlm",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlm-base-uncased",
"base_model:finetune:microsoft/layoutlm-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-03-06T10:15:31Z | ---
library_name: transformers
license: mit
base_model: microsoft/layoutlm-base-uncased
tags:
- generated_from_trainer
model-index:
- name: layoutlm-FUNSDxSynthetic-5fold
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-FUNSDxSynthetic-5fold
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0137
- Header: {'precision': 0.971830985915493, 'recall': 0.971830985915493, 'f1': 0.971830985915493, 'number': 71}
- Answer: {'precision': 0.9921568627450981, 'recall': 0.98828125, 'f1': 0.990215264187867, 'number': 256}
- Question: {'precision': 0.9818181818181818, 'recall': 0.989010989010989, 'f1': 0.9854014598540145, 'number': 273}
- Overall Precision: 0.9850
- Overall Recall: 0.9867
- Overall F1: 0.9858
- Overall Accuracy: 0.9965
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Header | Answer | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.0504 | 1.0 | 11 | 0.0183 | {'precision': 0.9722222222222222, 'recall': 0.9859154929577465, 'f1': 0.979020979020979, 'number': 71} | {'precision': 0.984375, 'recall': 0.984375, 'f1': 0.984375, 'number': 256} | {'precision': 0.9782608695652174, 'recall': 0.989010989010989, 'f1': 0.9836065573770493, 'number': 273} | 0.9801 | 0.9867 | 0.9834 | 0.9956 |
| 0.0459 | 2.0 | 22 | 0.0175 | {'precision': 0.971830985915493, 'recall': 0.971830985915493, 'f1': 0.971830985915493, 'number': 71} | {'precision': 0.98828125, 'recall': 0.98828125, 'f1': 0.98828125, 'number': 256} | {'precision': 0.9747292418772563, 'recall': 0.989010989010989, 'f1': 0.9818181818181818, 'number': 273} | 0.9801 | 0.9867 | 0.9834 | 0.9953 |
| 0.0289 | 3.0 | 33 | 0.0146 | {'precision': 0.9859154929577465, 'recall': 0.9859154929577465, 'f1': 0.9859154929577465, 'number': 71} | {'precision': 0.9921568627450981, 'recall': 0.98828125, 'f1': 0.990215264187867, 'number': 256} | {'precision': 0.9890909090909091, 'recall': 0.9963369963369964, 'f1': 0.9927007299270072, 'number': 273} | 0.9900 | 0.9917 | 0.9908 | 0.9965 |
| 0.0243 | 4.0 | 44 | 0.0157 | {'precision': 0.971830985915493, 'recall': 0.971830985915493, 'f1': 0.971830985915493, 'number': 71} | {'precision': 0.9921568627450981, 'recall': 0.98828125, 'f1': 0.990215264187867, 'number': 256} | {'precision': 0.9782608695652174, 'recall': 0.989010989010989, 'f1': 0.9836065573770493, 'number': 273} | 0.9834 | 0.9867 | 0.9850 | 0.9958 |
| 0.0215 | 5.0 | 55 | 0.0129 | {'precision': 0.971830985915493, 'recall': 0.971830985915493, 'f1': 0.971830985915493, 'number': 71} | {'precision': 0.9921875, 'recall': 0.9921875, 'f1': 0.9921875, 'number': 256} | {'precision': 0.9818181818181818, 'recall': 0.989010989010989, 'f1': 0.9854014598540145, 'number': 273} | 0.9850 | 0.9883 | 0.9867 | 0.9975 |
| 0.0176 | 6.0 | 66 | 0.0151 | {'precision': 0.971830985915493, 'recall': 0.971830985915493, 'f1': 0.971830985915493, 'number': 71} | {'precision': 0.9921875, 'recall': 0.9921875, 'f1': 0.9921875, 'number': 256} | {'precision': 0.9782608695652174, 'recall': 0.989010989010989, 'f1': 0.9836065573770493, 'number': 273} | 0.9834 | 0.9883 | 0.9859 | 0.9963 |
| 0.0151 | 7.0 | 77 | 0.0149 | {'precision': 0.9583333333333334, 'recall': 0.971830985915493, 'f1': 0.965034965034965, 'number': 71} | {'precision': 0.9921568627450981, 'recall': 0.98828125, 'f1': 0.990215264187867, 'number': 256} | {'precision': 0.9817518248175182, 'recall': 0.9853479853479854, 'f1': 0.9835466179159049, 'number': 273} | 0.9834 | 0.985 | 0.9842 | 0.9968 |
| 0.0136 | 8.0 | 88 | 0.0142 | {'precision': 0.9583333333333334, 'recall': 0.971830985915493, 'f1': 0.965034965034965, 'number': 71} | {'precision': 0.98828125, 'recall': 0.98828125, 'f1': 0.98828125, 'number': 256} | {'precision': 0.9781818181818182, 'recall': 0.9853479853479854, 'f1': 0.9817518248175183, 'number': 273} | 0.9801 | 0.985 | 0.9825 | 0.9965 |
| 0.0136 | 9.0 | 99 | 0.0148 | {'precision': 0.971830985915493, 'recall': 0.971830985915493, 'f1': 0.971830985915493, 'number': 71} | {'precision': 0.9921568627450981, 'recall': 0.98828125, 'f1': 0.990215264187867, 'number': 256} | {'precision': 0.9818181818181818, 'recall': 0.989010989010989, 'f1': 0.9854014598540145, 'number': 273} | 0.9850 | 0.9867 | 0.9858 | 0.9963 |
| 0.0103 | 10.0 | 110 | 0.0138 | {'precision': 0.9859154929577465, 'recall': 0.9859154929577465, 'f1': 0.9859154929577465, 'number': 71} | {'precision': 0.9921568627450981, 'recall': 0.98828125, 'f1': 0.990215264187867, 'number': 256} | {'precision': 0.9854545454545455, 'recall': 0.9926739926739927, 'f1': 0.989051094890511, 'number': 273} | 0.9884 | 0.99 | 0.9892 | 0.9965 |
| 0.0091 | 11.0 | 121 | 0.0136 | {'precision': 0.971830985915493, 'recall': 0.971830985915493, 'f1': 0.971830985915493, 'number': 71} | {'precision': 0.9921568627450981, 'recall': 0.98828125, 'f1': 0.990215264187867, 'number': 256} | {'precision': 0.9854014598540146, 'recall': 0.989010989010989, 'f1': 0.9872029250457038, 'number': 273} | 0.9867 | 0.9867 | 0.9867 | 0.9968 |
| 0.0081 | 12.0 | 132 | 0.0131 | {'precision': 0.9859154929577465, 'recall': 0.9859154929577465, 'f1': 0.9859154929577465, 'number': 71} | {'precision': 0.98828125, 'recall': 0.98828125, 'f1': 0.98828125, 'number': 256} | {'precision': 0.9890510948905109, 'recall': 0.9926739926739927, 'f1': 0.9908592321755026, 'number': 273} | 0.9884 | 0.99 | 0.9892 | 0.9973 |
| 0.0098 | 13.0 | 143 | 0.0136 | {'precision': 0.971830985915493, 'recall': 0.971830985915493, 'f1': 0.971830985915493, 'number': 71} | {'precision': 0.9921568627450981, 'recall': 0.98828125, 'f1': 0.990215264187867, 'number': 256} | {'precision': 0.9854014598540146, 'recall': 0.989010989010989, 'f1': 0.9872029250457038, 'number': 273} | 0.9867 | 0.9867 | 0.9867 | 0.9968 |
| 0.0066 | 14.0 | 154 | 0.0139 | {'precision': 0.971830985915493, 'recall': 0.971830985915493, 'f1': 0.971830985915493, 'number': 71} | {'precision': 0.9921568627450981, 'recall': 0.98828125, 'f1': 0.990215264187867, 'number': 256} | {'precision': 0.9818181818181818, 'recall': 0.989010989010989, 'f1': 0.9854014598540145, 'number': 273} | 0.9850 | 0.9867 | 0.9858 | 0.9965 |
| 0.007 | 15.0 | 165 | 0.0137 | {'precision': 0.971830985915493, 'recall': 0.971830985915493, 'f1': 0.971830985915493, 'number': 71} | {'precision': 0.9921568627450981, 'recall': 0.98828125, 'f1': 0.990215264187867, 'number': 256} | {'precision': 0.9818181818181818, 'recall': 0.989010989010989, 'f1': 0.9854014598540145, 'number': 273} | 0.9850 | 0.9867 | 0.9858 | 0.9965 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
anzhzh/valadapt-llama-3.1-8B-it-greek-Q8_0-GGUF | anzhzh | 2025-03-06T11:31:34Z | 0 | 0 | peft | [
"peft",
"gguf",
"llama-cpp",
"gguf-my-lora",
"base_model:faridlazuarda/valadapt-llama-3.1-8B-it-greek",
"base_model:adapter:faridlazuarda/valadapt-llama-3.1-8B-it-greek",
"region:us"
] | null | 2025-03-06T11:31:13Z | ---
base_model: faridlazuarda/valadapt-llama-3.1-8B-it-greek
library_name: peft
tags:
- llama-cpp
- gguf-my-lora
---
# anzhzh/valadapt-llama-3.1-8B-it-greek-Q8_0-GGUF
This LoRA adapter was converted to GGUF format from [`faridlazuarda/valadapt-llama-3.1-8B-it-greek`](https://huggingface.co/faridlazuarda/valadapt-llama-3.1-8B-it-greek) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/faridlazuarda/valadapt-llama-3.1-8B-it-greek) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora valadapt-llama-3.1-8B-it-greek-q8_0.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora valadapt-llama-3.1-8B-it-greek-q8_0.gguf (...other args)
```
To learn more about LoRA usage with the llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|
mradermacher/OmniSQL-7B-GGUF | mradermacher | 2025-03-06T11:29:54Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"Text-to-SQL",
"SQL",
"NL2SQL",
"Text2SQL",
"en",
"dataset:seeklhy/SynSQL-2.5M",
"base_model:seeklhy/OmniSQL-7B",
"base_model:quantized:seeklhy/OmniSQL-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-06T11:03:08Z | ---
base_model: seeklhy/OmniSQL-7B
datasets:
- seeklhy/SynSQL-2.5M
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- Text-to-SQL
- SQL
- NL2SQL
- Text2SQL
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/seeklhy/OmniSQL-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/OmniSQL-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
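As a hedged sketch of running a downloaded quant locally with the `llama-cpp-python` bindings (assuming that package is installed and the Q4_K_M file from the table below has been downloaded):
```python
from llama_cpp import Llama

llm = Llama(model_path="OmniSQL-7B.Q4_K_M.gguf", n_ctx=4096)  # assumes the file is in the working directory
out = llm("Translate to SQL: list all customers who placed an order in 2024.", max_tokens=128)
print(out["choices"][0]["text"])
```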
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OmniSQL-7B-GGUF/resolve/main/OmniSQL-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/OmniSQL-7B-GGUF/resolve/main/OmniSQL-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/OmniSQL-7B-GGUF/resolve/main/OmniSQL-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OmniSQL-7B-GGUF/resolve/main/OmniSQL-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/OmniSQL-7B-GGUF/resolve/main/OmniSQL-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/OmniSQL-7B-GGUF/resolve/main/OmniSQL-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OmniSQL-7B-GGUF/resolve/main/OmniSQL-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OmniSQL-7B-GGUF/resolve/main/OmniSQL-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/OmniSQL-7B-GGUF/resolve/main/OmniSQL-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/OmniSQL-7B-GGUF/resolve/main/OmniSQL-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OmniSQL-7B-GGUF/resolve/main/OmniSQL-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/OmniSQL-7B-GGUF/resolve/main/OmniSQL-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
baby-dev/d3bef7a4-8127-4161-ba5e-90289b45acf2 | baby-dev | 2025-03-06T11:25:52Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B",
"region:us"
] | null | 2025-03-06T11:25:30Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/Qwen2.5-Math-1.5B
model-index:
- name: baby-dev/d3bef7a4-8127-4161-ba5e-90289b45acf2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baby-dev/d3bef7a4-8127-4161-ba5e-90289b45acf2
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1120
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Perrilla/Lornad | Perrilla | 2025-03-06T11:24:40Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-03-06T11:24:39Z | ---
license: bigscience-openrail-m
---
|
mradermacher/jais-family-30b-8k-chat-GGUF | mradermacher | 2025-03-06T11:23:40Z | 36 | 0 | transformers | [
"transformers",
"gguf",
"Arabic",
"English",
"LLM",
"Decoder",
"causal-lm",
"jais-family",
"ar",
"en",
"base_model:inceptionai/jais-family-30b-8k-chat",
"base_model:quantized:inceptionai/jais-family-30b-8k-chat",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-05T23:07:37Z | ---
base_model: inceptionai/jais-family-30b-8k-chat
language:
- ar
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- Arabic
- English
- LLM
- Decoder
- causal-lm
- jais-family
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/inceptionai/jais-family-30b-8k-chat
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/jais-family-30b-8k-chat-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/jais-family-30b-8k-chat-GGUF/resolve/main/jais-family-30b-8k-chat.Q2_K.gguf) | Q2_K | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-30b-8k-chat-GGUF/resolve/main/jais-family-30b-8k-chat.Q3_K_S.gguf) | Q3_K_S | 23.9 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-30b-8k-chat-GGUF/resolve/main/jais-family-30b-8k-chat.Q3_K_M.gguf) | Q3_K_M | 25.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/jais-family-30b-8k-chat-GGUF/resolve/main/jais-family-30b-8k-chat.Q3_K_L.gguf) | Q3_K_L | 25.5 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-30b-8k-chat-GGUF/resolve/main/jais-family-30b-8k-chat.IQ4_XS.gguf) | IQ4_XS | 26.3 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-30b-8k-chat-GGUF/resolve/main/jais-family-30b-8k-chat.Q4_K_S.gguf) | Q4_K_S | 27.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/jais-family-30b-8k-chat-GGUF/resolve/main/jais-family-30b-8k-chat.Q4_K_M.gguf) | Q4_K_M | 28.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/jais-family-30b-8k-chat-GGUF/resolve/main/jais-family-30b-8k-chat.Q5_K_S.gguf) | Q5_K_S | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-30b-8k-chat-GGUF/resolve/main/jais-family-30b-8k-chat.Q5_K_M.gguf) | Q5_K_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-30b-8k-chat-GGUF/resolve/main/jais-family-30b-8k-chat.Q6_K.gguf) | Q6_K | 33.2 | very good quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Pathos-Epsilon-LLaMa-70B-GGUF | mradermacher | 2025-03-06T11:23:32Z | 237 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:TareksGraveyard/Pathos-Epsilon-LLaMa-70B",
"base_model:quantized:TareksGraveyard/Pathos-Epsilon-LLaMa-70B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-19T03:19:45Z | ---
base_model: TareksGraveyard/Pathos-Epsilon-LLaMa-70B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TareksGraveyard/Pathos-Epsilon-LLaMa-70B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Pathos-Epsilon-LLaMa-70B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Pathos-Epsilon-LLaMa-70B-GGUF/resolve/main/Pathos-Epsilon-LLaMa-70B.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Pathos-Epsilon-LLaMa-70B-GGUF/resolve/main/Pathos-Epsilon-LLaMa-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Pathos-Epsilon-LLaMa-70B-GGUF/resolve/main/Pathos-Epsilon-LLaMa-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Pathos-Epsilon-LLaMa-70B-GGUF/resolve/main/Pathos-Epsilon-LLaMa-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Pathos-Epsilon-LLaMa-70B-GGUF/resolve/main/Pathos-Epsilon-LLaMa-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Pathos-Epsilon-LLaMa-70B-GGUF/resolve/main/Pathos-Epsilon-LLaMa-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Pathos-Epsilon-LLaMa-70B-GGUF/resolve/main/Pathos-Epsilon-LLaMa-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Pathos-Epsilon-LLaMa-70B-GGUF/resolve/main/Pathos-Epsilon-LLaMa-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Pathos-Epsilon-LLaMa-70B-GGUF/resolve/main/Pathos-Epsilon-LLaMa-70B.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Pathos-Epsilon-LLaMa-70B-GGUF/resolve/main/Pathos-Epsilon-LLaMa-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Pathos-Epsilon-LLaMa-70B-GGUF/resolve/main/Pathos-Epsilon-LLaMa-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Pathos-Epsilon-LLaMa-70B-GGUF/resolve/main/Pathos-Epsilon-LLaMa-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Pathos-Epsilon-LLaMa-70B-GGUF/resolve/main/Pathos-Epsilon-LLaMa-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
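The Q6_K and Q8_0 quants above are split into parts; a minimal Python sketch to reassemble one, assuming both parts are already downloaded (byte-for-byte concatenation, equivalent to `cat`):

```python
import shutil

parts = [
    "Pathos-Epsilon-LLaMa-70B.Q6_K.gguf.part1of2",
    "Pathos-Epsilon-LLaMa-70B.Q6_K.gguf.part2of2",
]
# Stream each part into a single output file, in order.
with open("Pathos-Epsilon-LLaMa-70B.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)
```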
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Basharat78/DeepSeek-R1-finetuned-2iq-375-records-1000-iterations-V1a_6Mar2025 | Basharat78 | 2025-03-06T11:21:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-06T11:21:37Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Basharat78
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
havinash-ai/7b961733-ce32-4b35-9cc9-be8c5d7c5f2f | havinash-ai | 2025-03-06T11:21:18Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B",
"region:us"
] | null | 2025-03-06T11:21:02Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/Qwen2.5-Math-1.5B
model-index:
- name: havinash-ai/7b961733-ce32-4b35-9cc9-be8c5d7c5f2f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# havinash-ai/7b961733-ce32-4b35-9cc9-be8c5d7c5f2f
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1074
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1-GGUF | mradermacher | 2025-03-06T11:20:33Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"base_model:linkonx/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1",
"base_model:quantized:linkonx/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-06T11:07:40Z | ---
base_model: linkonx/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/linkonx/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1-GGUF/resolve/main/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1.Q2_K.gguf) | Q2_K | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1-GGUF/resolve/main/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1-GGUF/resolve/main/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1.Q3_K_M.gguf) | Q3_K_M | 4.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1-GGUF/resolve/main/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1.Q3_K_L.gguf) | Q3_K_L | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1-GGUF/resolve/main/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1-GGUF/resolve/main/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1-GGUF/resolve/main/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1.Q4_K_M.gguf) | Q4_K_M | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1-GGUF/resolve/main/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1.Q5_K_S.gguf) | Q5_K_S | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1-GGUF/resolve/main/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1.Q5_K_M.gguf) | Q5_K_M | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1-GGUF/resolve/main/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1.Q6_K.gguf) | Q6_K | 6.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1-GGUF/resolve/main/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1.Q8_0.gguf) | Q8_0 | 8.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1-GGUF/resolve/main/EXAONE-3.5-7.8B-LinkOnX-Modeler-Code-v1.1.f16.gguf) | f16 | 15.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
tjohn327/cross-encoder-scion-l6-v2 | tjohn327 | 2025-03-06T11:19:13Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"cross-encoder",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-06T00:19:50Z | ---
library_name: transformers
tags:
- cross-encoder
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
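A minimal sketch, assuming this checkpoint follows the standard sentence-transformers cross-encoder interface for scoring (query, passage) pairs; the example texts are illustrative only:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("tjohn327/cross-encoder-scion-l6-v2")
# Higher score = the passage is judged more relevant to the query.
scores = model.predict([
    ("what is SCION?", "SCION is a path-aware internet architecture."),
    ("what is SCION?", "Butterflies migrate long distances."),
])
print(scores)
```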
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
anzhzh/valadapt-llama-3.1-8B-it-chinese-Q8_0-GGUF | anzhzh | 2025-03-06T11:18:31Z | 0 | 0 | peft | [
"peft",
"gguf",
"llama-cpp",
"gguf-my-lora",
"base_model:faridlazuarda/valadapt-llama-3.1-8B-it-chinese",
"base_model:adapter:faridlazuarda/valadapt-llama-3.1-8B-it-chinese",
"region:us"
] | null | 2025-03-06T11:18:26Z | ---
base_model: faridlazuarda/valadapt-llama-3.1-8B-it-chinese
library_name: peft
tags:
- llama-cpp
- gguf-my-lora
---
# anzhzh/valadapt-llama-3.1-8B-it-chinese-Q8_0-GGUF
This LoRA adapter was converted to GGUF format from [`faridlazuarda/valadapt-llama-3.1-8B-it-chinese`](https://huggingface.co/faridlazuarda/valadapt-llama-3.1-8B-it-chinese) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/faridlazuarda/valadapt-llama-3.1-8B-it-chinese) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora valadapt-llama-3.1-8B-it-chinese-q8_0.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora valadapt-llama-3.1-8B-it-chinese-q8_0.gguf (...other args)
```
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|
ig1/QwQ-32B-FP8-Dynamic | ig1 | 2025-03-06T11:17:42Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"base_model:Qwen/QwQ-32B",
"base_model:quantized:Qwen/QwQ-32B",
"license:apache-2.0",
"compressed-tensors",
"region:us"
] | null | 2025-03-06T10:39:37Z | ---
license: apache-2.0
base_model:
- Qwen/QwQ-32B
---
Example run:
```bash
docker run --rm --runtime nvidia --gpus 'all' -e 'HF_TOKEN' -v '/root/.cache/huggingface:/root/.cache/huggingface' -p 127.0.0.1:8000:8000 "vllm/vllm-openai:v0.7.3" --model 'ig1/QwQ-32B-FP8-Dynamic' --served-model-name 'QwQ-32B' --enable-reasoning --reasoning-parser deepseek_r1 --override-generation-config '{"temperature":0.6,"top_p":0.95}'
```
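Once the server is up, it can be queried through its OpenAI-compatible API; a minimal sketch, assuming the default endpoint from the command above:

```python
from openai import OpenAI

# api_key is required by the client but unused by a local vLLM server.
client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="QwQ-32B",  # matches --served-model-name above
    messages=[{"role": "user", "content": "Solve: what is 17 * 23?"}],
)
print(resp.choices[0].message.content)
```
 |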
TFOCUS/bruno_tester_18 | TFOCUS | 2025-03-06T11:14:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-06T11:11:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sheepwjy/model_gguf | sheepwjy | 2025-03-06T11:13:31Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-06T11:11:35Z | ---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sheepwjy
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
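No usage example is provided; a minimal sketch with llama-cpp-python, noting that the GGUF file name below is an assumption — check the repo's file list for the actual quant name:

```python
from llama_cpp import Llama

# The filename glob assumes a Q4_K_M quant exists in the repo.
llm = Llama.from_pretrained(
    repo_id="sheepwjy/model_gguf",
    filename="*Q4_K_M.gguf",
)
out = llm("Question: what is 2 + 2?\nAnswer:", max_tokens=32)
print(out["choices"][0]["text"])
```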
|
MayankTheLegend/Llama-3.2-3B-Instruct-chat-model-smartcoin-with-information-v3 | MayankTheLegend | 2025-03-06T11:11:14Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-06T08:08:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
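Pending official instructions, a minimal generation sketch, assuming a standard chat-tuned Llama checkpoint (the prompt is illustrative only):

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="MayankTheLegend/Llama-3.2-3B-Instruct-chat-model-smartcoin-with-information-v3",
)
messages = [{"role": "user", "content": "Hello! What can you help me with?"}]
out = pipe(messages, max_new_tokens=128)
print(out[0]["generated_text"][-1]["content"])  # assistant reply
```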
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SimulaMet-HOST/HockeyOrient | SimulaMet-HOST | 2025-03-06T11:10:51Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-01-11T18:29:17Z | ---
license: mit
---
# HockeyOrient SqueezeNet Model
<div style="background-color:#f8f9fa; color:black; border-left: 6px solid #28a745; padding: 10px; margin: 10px 0;">
🔗 This model is trained on the <span style="color:red">HockeyOrient</span> dataset.
- 📊 Access the dataset used for training here: <a href="https://huggingface.co/datasets/SimulaMet-HOST/HockeyOrient" style="color:blue;">https://huggingface.co/datasets/SimulaMet-HOST/HockeyOrient</a>
- 🚀 Try the model in action with our interactive <span style="color:red">Hugging Face Space</span>: <a href="https://huggingface.co/spaces/SimulaMet-HOST/HockeyOrient" style="color:blue;">https://huggingface.co/spaces/SimulaMet-HOST/HockeyOrient</a>
</div>
## Overview
This model is trained for ice hockey player orientation classification, classifying cropped player images into one of eight orientations: Top, Top-Right, Right, Bottom-Right, Bottom, Bottom-Left, Left, and Top-Left. It is based on the SqueezeNet architecture and achieves an F1 score of **75%**.
## Model Details
- **Architecture**: SqueezeNet (modified for 8-class classification).
- **Training Configuration**:
- Learning rate: 1e-4
- Batch size: 24
- Epochs: 300
- Weight decay: 1e-4
- Dropout: 0.3
- Early stopping: patience = 50
- Augmentations: Color jitter (no rotation)
- **Performance**:
- Accuracy: ~75%
- F1 Score: ~75%
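For reference, a minimal sketch of the 8-class head swap described above, assuming torchvision's stock SqueezeNet (whether training started from ImageNet weights is not stated):

```python
import torch.nn as nn
from torchvision import models

model = models.squeezenet1_1(weights="IMAGENET1K_V1")
model.classifier[0] = nn.Dropout(p=0.3)                  # dropout from the training config
model.classifier[1] = nn.Conv2d(512, 8, kernel_size=1)   # 8 orientation classes
```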
## Usage
1. Extract frames from a video using OpenCV.
2. Detect player bounding boxes with a YOLO model.
3. Crop player images, resize them to 224x224, and preprocess them with the following PyTorch transformations (a runnable sketch of this step follows the list):
- Resize to (224, 224)
- Normalize with mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225].
4. Classify the direction of each cropped player image using the SqueezeNet model:
```python
import torch

with torch.no_grad():
    output = model(image_tensor)
    direction_class = torch.argmax(output, dim=1).item()
```
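As referenced in step 3, a minimal preprocessing sketch that produces the `image_tensor` used above (`player_crop.jpg` is an illustrative stand-in for one YOLO-cropped player image):

```python
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

crop = Image.open("player_crop.jpg").convert("RGB")
image_tensor = preprocess(crop).unsqueeze(0)  # shape: (1, 3, 224, 224)
```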
<div style="background-color:#e7f3ff; color:black; border-left: 6px solid #0056b3; padding: 12px; margin: 10px 0;">
<span style="color:black; font-weight:bold;">📩 For any questions regarding this project, or to discuss potential collaboration and joint research opportunities, please contact:</span>
<ul style="color:black;">
<li><span style="font-weight:bold; color:black;">Mehdi Houshmand</span>: <a href="mailto:[email protected]" style="color:blue; text-decoration:none;">[email protected]</a></li>
<li><span style="font-weight:bold; color:black;">Cise Midoglu</span>: <a href="mailto:[email protected]" style="color:blue; text-decoration:none;">[email protected]</a></li>
<li><span style="font-weight:bold; color:black;">Pål Halvorsen</span>: <a href="mailto:[email protected]" style="color:blue; text-decoration:none;">[email protected]</a></li>
</ul>
</div>
|
mradermacher/Llama_3.2_1b_OrcaSun_V1-GGUF | mradermacher | 2025-03-06T11:07:51Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Nexesenex/Llama_3.2_1b_OrcaSun_V1",
"base_model:quantized:Nexesenex/Llama_3.2_1b_OrcaSun_V1",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-06T11:01:49Z | ---
base_model: Nexesenex/Llama_3.2_1b_OrcaSun_V1
language:
- en
library_name: transformers
license: llama3.2
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Nexesenex/Llama_3.2_1b_OrcaSun_V1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama_3.2_1b_OrcaSun_V1-GGUF/resolve/main/Llama_3.2_1b_OrcaSun_V1.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.2_1b_OrcaSun_V1-GGUF/resolve/main/Llama_3.2_1b_OrcaSun_V1.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.2_1b_OrcaSun_V1-GGUF/resolve/main/Llama_3.2_1b_OrcaSun_V1.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.2_1b_OrcaSun_V1-GGUF/resolve/main/Llama_3.2_1b_OrcaSun_V1.Q3_K_L.gguf) | Q3_K_L | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.2_1b_OrcaSun_V1-GGUF/resolve/main/Llama_3.2_1b_OrcaSun_V1.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.2_1b_OrcaSun_V1-GGUF/resolve/main/Llama_3.2_1b_OrcaSun_V1.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.2_1b_OrcaSun_V1-GGUF/resolve/main/Llama_3.2_1b_OrcaSun_V1.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.2_1b_OrcaSun_V1-GGUF/resolve/main/Llama_3.2_1b_OrcaSun_V1.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.2_1b_OrcaSun_V1-GGUF/resolve/main/Llama_3.2_1b_OrcaSun_V1.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.2_1b_OrcaSun_V1-GGUF/resolve/main/Llama_3.2_1b_OrcaSun_V1.Q6_K.gguf) | Q6_K | 1.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.2_1b_OrcaSun_V1-GGUF/resolve/main/Llama_3.2_1b_OrcaSun_V1.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.2_1b_OrcaSun_V1-GGUF/resolve/main/Llama_3.2_1b_OrcaSun_V1.f16.gguf) | f16 | 3.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
youralien/roberta-Empathy-goodareas-eval_FeedbackESConv5pp_CARE10pp-sweeps-current | youralien | 2025-03-06T11:05:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-06T07:29:19Z | ---
library_name: transformers
license: mit
base_model: FacebookAI/roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: roberta-Empathy-goodareas-eval_FeedbackESConv5pp_CARE10pp-sweeps-current
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-Empathy-goodareas-eval_FeedbackESConv5pp_CARE10pp-sweeps-current
This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4517
- Accuracy: 0.8126
- Precision: 0.6552
- Recall: 0.8531
- F1: 0.7411
## Model description
More information needed
## Intended uses & limitations
More information needed
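A minimal inference sketch, assuming the standard single-label classification head saved with this checkpoint (label names depend on the training setup; the input is illustrative):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="youralien/roberta-Empathy-goodareas-eval_FeedbackESConv5pp_CARE10pp-sweeps-current",
)
print(clf("That sounds really hard. I'm here for you."))
```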
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.4215053849812563e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.6796 | 1.0 | 123 | 0.5135 | 0.7330 | 0.5527 | 0.7918 | 0.6510 |
| 0.5601 | 2.0 | 246 | 0.3676 | 0.7869 | 0.6287 | 0.7878 | 0.6993 |
| 0.4963 | 3.0 | 369 | 0.3964 | 0.8216 | 0.6893 | 0.7878 | 0.7352 |
| 0.4666 | 4.0 | 492 | 0.5450 | 0.7433 | 0.5547 | 0.9306 | 0.6951 |
| 0.44 | 5.0 | 615 | 0.2741 | 0.8216 | 0.7154 | 0.7184 | 0.7169 |
| 0.3848 | 6.0 | 738 | 0.3214 | 0.8177 | 0.6846 | 0.7796 | 0.7290 |
| 0.3553 | 7.0 | 861 | 0.3163 | 0.8139 | 0.6825 | 0.7633 | 0.7206 |
| 0.3197 | 8.0 | 984 | 0.4856 | 0.7882 | 0.6136 | 0.8816 | 0.7236 |
| 0.2825 | 9.0 | 1107 | 0.4515 | 0.7959 | 0.6303 | 0.8490 | 0.7235 |
| 0.2423 | 10.0 | 1230 | 0.4517 | 0.8126 | 0.6552 | 0.8531 | 0.7411 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 2.21.0
- Tokenizers 0.21.0
|
NiloofarMomeni/distilhubert-finetuned-strain | NiloofarMomeni | 2025-03-06T11:05:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:PQVD",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2025-03-06T10:57:50Z | ---
library_name: transformers
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- PQVD
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-strain
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: PQVD
type: PQVD
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8021978021978022
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-strain
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the PQVD dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8113
- Accuracy: 0.8022
## Model description
More information needed
## Intended uses & limitations
More information needed
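A minimal inference sketch, assuming the standard audio-classification head (`voice_sample.wav` is an illustrative file name):

```python
from transformers import pipeline

clf = pipeline(
    "audio-classification",
    model="NiloofarMomeni/distilhubert-finetuned-strain",
)
print(clf("voice_sample.wav"))
```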
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6067 | 1.0 | 92 | 0.5729 | 0.7473 |
| 0.6384 | 2.0 | 184 | 0.5746 | 0.7692 |
| 0.522 | 3.0 | 276 | 0.5744 | 0.7582 |
| 0.4009 | 4.0 | 368 | 0.6686 | 0.7473 |
| 0.1977 | 5.0 | 460 | 0.5451 | 0.7802 |
| 0.4943 | 6.0 | 552 | 0.6118 | 0.7802 |
| 0.2251 | 7.0 | 644 | 0.5647 | 0.7912 |
| 0.0488 | 8.0 | 736 | 0.6797 | 0.8352 |
| 0.2085 | 9.0 | 828 | 0.8064 | 0.7912 |
| 0.1512 | 10.0 | 920 | 0.8113 | 0.8022 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
TFOCUS/bruno_tester_17 | TFOCUS | 2025-03-06T11:04:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-06T11:01:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/EZO-Qwen2.5-72B-Instruct-GGUF | mradermacher | 2025-03-06T11:01:52Z | 79 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"ja",
"en",
"base_model:AXCXEPT/EZO-Qwen2.5-72B-Instruct",
"base_model:quantized:AXCXEPT/EZO-Qwen2.5-72B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-02T07:56:25Z | ---
base_model: AXCXEPT/EZO-Qwen2.5-72B-Instruct
language:
- ja
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AXCXEPT/EZO-Qwen2.5-72B-Instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/EZO-Qwen2.5-72B-Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-72B-Instruct-GGUF/resolve/main/EZO-Qwen2.5-72B-Instruct.Q2_K.gguf) | Q2_K | 29.9 | |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-72B-Instruct-GGUF/resolve/main/EZO-Qwen2.5-72B-Instruct.Q3_K_S.gguf) | Q3_K_S | 34.6 | |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-72B-Instruct-GGUF/resolve/main/EZO-Qwen2.5-72B-Instruct.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-72B-Instruct-GGUF/resolve/main/EZO-Qwen2.5-72B-Instruct.Q3_K_L.gguf) | Q3_K_L | 39.6 | |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-72B-Instruct-GGUF/resolve/main/EZO-Qwen2.5-72B-Instruct.IQ4_XS.gguf) | IQ4_XS | 40.3 | |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-72B-Instruct-GGUF/resolve/main/EZO-Qwen2.5-72B-Instruct.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-72B-Instruct-GGUF/resolve/main/EZO-Qwen2.5-72B-Instruct.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/EZO-Qwen2.5-72B-Instruct-GGUF/resolve/main/EZO-Qwen2.5-72B-Instruct.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/EZO-Qwen2.5-72B-Instruct-GGUF/resolve/main/EZO-Qwen2.5-72B-Instruct.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/EZO-Qwen2.5-72B-Instruct-GGUF/resolve/main/EZO-Qwen2.5-72B-Instruct.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/EZO-Qwen2.5-72B-Instruct-GGUF/resolve/main/EZO-Qwen2.5-72B-Instruct.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/EZO-Qwen2.5-72B-Instruct-GGUF/resolve/main/EZO-Qwen2.5-72B-Instruct.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/EZO-Qwen2.5-72B-Instruct-GGUF/resolve/main/EZO-Qwen2.5-72B-Instruct.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/EZO-Qwen2.5-72B-Instruct-GGUF/resolve/main/EZO-Qwen2.5-72B-Instruct.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/EZO-Qwen2.5-72B-Instruct-GGUF/resolve/main/EZO-Qwen2.5-72B-Instruct.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
celinah/dummy-xet-testing | celinah | 2025-03-06T11:01:37Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-26T10:18:01Z | ---
library_name: transformers
tags: []
---
# Model Card of a dummy model for xet testing
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yesuel/sante250118 | yesuel | 2025-03-06T11:01:11Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-01-18T04:54:55Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
Mohana0304/sd-class-butterflies-32 | Mohana0304 | 2025-03-06T11:00:10Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2025-03-06T11:00:02Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Mohana0304/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Abinanthan2004/sd-class-butterflies-32 | Abinanthan2004 | 2025-03-06T10:59:42Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2025-03-06T10:59:27Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Abinanthan2004/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
abdullahhatem/a2c-PandaReachDense-v3 | abdullahhatem | 2025-03-06T10:59:38Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-03-06T09:25:48Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.26 +/- 0.25
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `{algo}-{env}.zip` Hub naming; check the repo's files):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename assumed from the standard SB3 Hub convention
checkpoint = load_from_hub("abdullahhatem/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
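To roll the policy out in the environment (a sketch; assumes `panda_gym` is installed, since importing it registers the Panda environments):

```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- importing registers PandaReachDense-v3

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
action, _states = model.predict(obs, deterministic=True)
obs, reward, terminated, truncated, info = env.step(action)
```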
|
mradermacher/Crazy-Qwen2-7b-GGUF | mradermacher | 2025-03-06T10:59:24Z | 304 | 2 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"en",
"base_model:bunnycore/Crazy-Qwen2-7b",
"base_model:quantized:bunnycore/Crazy-Qwen2-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-03T05:49:47Z | ---
base_model: bunnycore/Crazy-Qwen2-7b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Crazy-Qwen2-7b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Crazy-Qwen2-7b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
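For a quick local smoke test, a minimal sketch with `llama-cpp-python` (assumes the Q4_K_M file from the table below has already been downloaded from this repo):

```python
from llama_cpp import Llama

# Path assumes the quant was downloaded into the working directory
llm = Llama(model_path="Crazy-Qwen2-7b.Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```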
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Crazy-Qwen2-7b-GGUF/resolve/main/Crazy-Qwen2-7b.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Crazy-Qwen2-7b-GGUF/resolve/main/Crazy-Qwen2-7b.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Crazy-Qwen2-7b-GGUF/resolve/main/Crazy-Qwen2-7b.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Crazy-Qwen2-7b-GGUF/resolve/main/Crazy-Qwen2-7b.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Crazy-Qwen2-7b-GGUF/resolve/main/Crazy-Qwen2-7b.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Crazy-Qwen2-7b-GGUF/resolve/main/Crazy-Qwen2-7b.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Crazy-Qwen2-7b-GGUF/resolve/main/Crazy-Qwen2-7b.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Crazy-Qwen2-7b-GGUF/resolve/main/Crazy-Qwen2-7b.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Crazy-Qwen2-7b-GGUF/resolve/main/Crazy-Qwen2-7b.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Crazy-Qwen2-7b-GGUF/resolve/main/Crazy-Qwen2-7b.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Crazy-Qwen2-7b-GGUF/resolve/main/Crazy-Qwen2-7b.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Crazy-Qwen2-7b-GGUF/resolve/main/Crazy-Qwen2-7b.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
leohengwang/leomodel | leohengwang | 2025-03-06T10:59:22Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-06T10:59:22Z | ---
license: apache-2.0
---
|
Akshit-77/llama-3.2-3b-chatbot | Akshit-77 | 2025-03-06T10:59:07Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-11-02T03:45:55Z | ---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-3B-Instruct
tags:
- generated_from_trainer
model-index:
- name: llama-3.2-3b-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-3.2-3b-chatbot
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6208
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
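Expressed as 🤗 `TrainingArguments`, the setup above corresponds roughly to the following sketch (`output_dir` is a placeholder, and `fp16=True` stands in for "Native AMP"):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llama-3.2-3b-chatbot",   # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=4,       # 16 x 4 = total train batch size 64
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                           # "Native AMP" mixed precision
)
```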
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8392 | 0.9292 | 200 | 1.6995 |
| 1.445 | 1.8583 | 400 | 1.5649 |
| 1.1078 | 2.7875 | 600 | 1.5232 |
| 0.819 | 3.7166 | 800 | 1.5380 |
| 0.5895 | 4.6458 | 1000 | 1.6208 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
mradermacher/Kiwi-1.0-0.7B-32k-GGUF | mradermacher | 2025-03-06T10:58:52Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:wikimedia/wikipedia",
"base_model:empirischtech/Kiwi-1.0-0.7B-32k",
"base_model:quantized:empirischtech/Kiwi-1.0-0.7B-32k",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-06T10:56:02Z | ---
base_model: empirischtech/Kiwi-1.0-0.7B-32k
datasets:
- wikimedia/wikipedia
language:
- en
library_name: transformers
license: cc-by-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/empirischtech/Kiwi-1.0-0.7B-32k
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
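To fetch a quant programmatically, a minimal sketch with `huggingface_hub` (the filename is taken from the table below):

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Kiwi-1.0-0.7B-32k-GGUF",
    filename="Kiwi-1.0-0.7B-32k.Q4_K_M.gguf",
)
print(path)  # local path to hand to your GGUF runtime (e.g. llama.cpp)
```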
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kiwi-1.0-0.7B-32k-GGUF/resolve/main/Kiwi-1.0-0.7B-32k.Q3_K_S.gguf) | Q3_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-1.0-0.7B-32k-GGUF/resolve/main/Kiwi-1.0-0.7B-32k.Q2_K.gguf) | Q2_K | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-1.0-0.7B-32k-GGUF/resolve/main/Kiwi-1.0-0.7B-32k.IQ4_XS.gguf) | IQ4_XS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-1.0-0.7B-32k-GGUF/resolve/main/Kiwi-1.0-0.7B-32k.Q3_K_M.gguf) | Q3_K_M | 0.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-1.0-0.7B-32k-GGUF/resolve/main/Kiwi-1.0-0.7B-32k.Q3_K_L.gguf) | Q3_K_L | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-1.0-0.7B-32k-GGUF/resolve/main/Kiwi-1.0-0.7B-32k.Q4_K_S.gguf) | Q4_K_S | 0.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-1.0-0.7B-32k-GGUF/resolve/main/Kiwi-1.0-0.7B-32k.Q4_K_M.gguf) | Q4_K_M | 0.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-1.0-0.7B-32k-GGUF/resolve/main/Kiwi-1.0-0.7B-32k.Q5_K_S.gguf) | Q5_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-1.0-0.7B-32k-GGUF/resolve/main/Kiwi-1.0-0.7B-32k.Q5_K_M.gguf) | Q5_K_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-1.0-0.7B-32k-GGUF/resolve/main/Kiwi-1.0-0.7B-32k.Q6_K.gguf) | Q6_K | 0.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-1.0-0.7B-32k-GGUF/resolve/main/Kiwi-1.0-0.7B-32k.Q8_0.gguf) | Q8_0 | 0.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Kiwi-1.0-0.7B-32k-GGUF/resolve/main/Kiwi-1.0-0.7B-32k.f16.gguf) | f16 | 1.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
kavinsankar/sd-class-butterflies-32 | kavinsankar | 2025-03-06T10:58:50Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2025-03-06T10:58:32Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('kavinsankar/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
youralien/roberta-cbl-Reflections-goodareas-eval_FeedbackESConv5pp_CARE10pp-sweeps-current | youralien | 2025-03-06T10:57:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-06T10:18:20Z | ---
library_name: transformers
license: mit
base_model: FacebookAI/roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: roberta-cbl-Reflections-goodareas-eval_FeedbackESConv5pp_CARE10pp-sweeps-current
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-cbl-Reflections-goodareas-eval_FeedbackESConv5pp_CARE10pp-sweeps-current
This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6912
- Accuracy: 0.8883
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0009958025071527669
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.7589 | 1.0 | 243 | 0.8676 | 0.1117 | 0.1117 | 1.0 | 0.2009 |
| 0.7949 | 2.0 | 486 | 0.9892 | 0.1117 | 0.1117 | 1.0 | 0.2009 |
| 0.7443 | 3.0 | 729 | 0.6907 | 0.8883 | 0.0 | 0.0 | 0.0 |
| 0.765 | 4.0 | 972 | 0.7383 | 0.1117 | 0.1117 | 1.0 | 0.2009 |
| 0.7549 | 5.0 | 1215 | 0.6875 | 0.8883 | 0.0 | 0.0 | 0.0 |
| 0.7383 | 6.0 | 1458 | 0.6875 | 0.8883 | 0.0 | 0.0 | 0.0 |
| 0.748 | 7.0 | 1701 | 0.6875 | 0.8883 | 0.0 | 0.0 | 0.0 |
| 0.7502 | 8.0 | 1944 | 0.7498 | 0.1117 | 0.1117 | 1.0 | 0.2009 |
| 0.7406 | 9.0 | 2187 | 0.6895 | 0.8883 | 0.0 | 0.0 | 0.0 |
| 0.725 | 10.0 | 2430 | 0.6979 | 0.1117 | 0.1117 | 1.0 | 0.2009 |
| 0.7159 | 11.0 | 2673 | 0.7053 | 0.1117 | 0.1117 | 1.0 | 0.2009 |
| 0.7222 | 12.0 | 2916 | 0.6876 | 0.8883 | 0.0 | 0.0 | 0.0 |
| 0.7139 | 13.0 | 3159 | 0.7088 | 0.1117 | 0.1117 | 1.0 | 0.2009 |
| 0.7053 | 14.0 | 3402 | 0.6971 | 0.1117 | 0.1117 | 1.0 | 0.2009 |
| 0.7206 | 15.0 | 3645 | 0.6880 | 0.8883 | 0.0 | 0.0 | 0.0 |
| 0.7108 | 16.0 | 3888 | 0.6886 | 0.8883 | 0.0 | 0.0 | 0.0 |
| 0.7075 | 17.0 | 4131 | 0.6954 | 0.8883 | 0.0 | 0.0 | 0.0 |
| 0.7035 | 18.0 | 4374 | 0.6885 | 0.8883 | 0.0 | 0.0 | 0.0 |
| 0.696 | 19.0 | 4617 | 0.6909 | 0.8883 | 0.0 | 0.0 | 0.0 |
| 0.6996 | 20.0 | 4860 | 0.6912 | 0.8883 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 2.21.0
- Tokenizers 0.21.0
|
texanrangee/9b77f65a-3f42-4c69-b359-5f3ff6fc2f5a | texanrangee | 2025-03-06T10:56:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-06T07:03:16Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/jais-family-6p7b-i1-GGUF | mradermacher | 2025-03-06T10:56:03Z | 424 | 0 | transformers | [
"transformers",
"gguf",
"Arabic",
"English",
"LLM",
"Decoder",
"causal-lm",
"jais-family",
"ar",
"en",
"base_model:inceptionai/jais-family-6p7b",
"base_model:quantized:inceptionai/jais-family-6p7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-03-04T21:28:02Z | ---
base_model: inceptionai/jais-family-6p7b
language:
- ar
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- Arabic
- English
- LLM
- Decoder
- causal-lm
- jais-family
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/inceptionai/jais-family-6p7b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/jais-family-6p7b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/jais-family-6p7b-i1-GGUF/resolve/main/jais-family-6p7b.i1-IQ1_S.gguf) | i1-IQ1_S | 4.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/jais-family-6p7b-i1-GGUF/resolve/main/jais-family-6p7b.i1-IQ1_M.gguf) | i1-IQ1_M | 4.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/jais-family-6p7b-i1-GGUF/resolve/main/jais-family-6p7b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-6p7b-i1-GGUF/resolve/main/jais-family-6p7b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-6p7b-i1-GGUF/resolve/main/jais-family-6p7b.i1-IQ2_S.gguf) | i1-IQ2_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-6p7b-i1-GGUF/resolve/main/jais-family-6p7b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.0 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/jais-family-6p7b-i1-GGUF/resolve/main/jais-family-6p7b.i1-IQ2_M.gguf) | i1-IQ2_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-6p7b-i1-GGUF/resolve/main/jais-family-6p7b.i1-Q2_K.gguf) | i1-Q2_K | 5.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/jais-family-6p7b-i1-GGUF/resolve/main/jais-family-6p7b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/jais-family-6p7b-i1-GGUF/resolve/main/jais-family-6p7b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-6p7b-i1-GGUF/resolve/main/jais-family-6p7b.i1-IQ3_S.gguf) | i1-IQ3_S | 5.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/jais-family-6p7b-i1-GGUF/resolve/main/jais-family-6p7b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/jais-family-6p7b-i1-GGUF/resolve/main/jais-family-6p7b.i1-IQ3_M.gguf) | i1-IQ3_M | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-6p7b-i1-GGUF/resolve/main/jais-family-6p7b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 5.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/jais-family-6p7b-i1-GGUF/resolve/main/jais-family-6p7b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/jais-family-6p7b-i1-GGUF/resolve/main/jais-family-6p7b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-6p7b-i1-GGUF/resolve/main/jais-family-6p7b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 6.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/jais-family-6p7b-i1-GGUF/resolve/main/jais-family-6p7b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 6.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/jais-family-6p7b-i1-GGUF/resolve/main/jais-family-6p7b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-6p7b-i1-GGUF/resolve/main/jais-family-6p7b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-6p7b-i1-GGUF/resolve/main/jais-family-6p7b.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
texanrangee/d5e1408a-ac36-4191-9566-6a7492e08d04 | texanrangee | 2025-03-06T10:55:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-06T08:31:08Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Nerva1228/lingluanya | Nerva1228 | 2025-03-06T10:55:42Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-03-06T10:55:41Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: lingluanya
---
# Lingluanya
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `lingluanya` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/lingluanya', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
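To bake the LoRA into the base weights for slightly faster repeated inference, a sketch (the scale value is an assumption):

```py
# Merges the loaded LoRA into the pipeline weights in place
pipeline.fuse_lora(lora_scale=1.0)
image = pipeline('lingluanya portrait, studio lighting').images[0]
```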
|
sheepwjy/model | sheepwjy | 2025-03-06T10:55:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-06T10:51:14Z | ---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sheepwjy
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Selvaganapathi14/sd-class-butterflies-32 | Selvaganapathi14 | 2025-03-06T10:54:59Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2025-03-06T10:53:17Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Selvaganapathi14/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
TFOCUS/bruno_tester_16 | TFOCUS | 2025-03-06T10:54:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-06T10:51:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
adamhao123/Qwen2.5-1.5B-Open-R1-Distill-sft | adamhao123 | 2025-03-06T10:53:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:HuggingFaceH4/Bespoke-Stratos-17k",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-06T06:54:28Z | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
datasets: HuggingFaceH4/Bespoke-Stratos-17k
library_name: transformers
model_name: Qwen2.5-1.5B-Open-R1-Distill-sft
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-1.5B-Open-R1-Distill-sft
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) on the [HuggingFaceH4/Bespoke-Stratos-17k](https://huggingface.co/datasets/HuggingFaceH4/Bespoke-Stratos-17k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="adamhao123/Qwen2.5-1.5B-Open-R1-Distill-sft", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/songhao9021-9uest/huggingface/runs/e01udmry)
This model was trained with SFT.
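The run can be reproduced in outline with TRL's `SFTTrainer`; the following is a sketch under assumptions (default hyperparameters, a `train` split, and a placeholder `output_dir`):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Dataset and base model are taken from this card; everything else is default
dataset = load_dataset("HuggingFaceH4/Bespoke-Stratos-17k", split="train")
trainer = SFTTrainer(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
    train_dataset=dataset,
    args=SFTConfig(output_dir="Qwen2.5-1.5B-Open-R1-Distill-sft"),
)
trainer.train()
```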
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.50.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.3.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
PharynxAI/Synctalk_finetune_testing | PharynxAI | 2025-03-06T10:52:01Z | 151 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-28T08:31:25Z | ---
base_model: base_model/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** PharynxAI
- **License:** apache-2.0
- **Finetuned from model :** base_model/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
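A minimal inference sketch with 🤗 transformers (assumes merged 16-bit weights were pushed to this repo; if it holds LoRA adapters instead, load them with PEFT):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "PharynxAI/Synctalk_finetune_testing"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```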
|
gopalakrishnanb/sd-class-butterflies-32 | gopalakrishnanb | 2025-03-06T10:51:24Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2025-03-06T10:51:13Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('gopalakrishnanb/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Kiruba-2004/sd-class-butterflies-32 | Kiruba-2004 | 2025-03-06T10:51:11Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2025-03-06T10:48:56Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Kiruba-2004/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
ryanhan813/whisper-large-v3-turbo-korean-lqa-number-data-5500 | ryanhan813 | 2025-03-06T10:50:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-03-06T10:45:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
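Given the repository's `automatic-speech-recognition` pipeline tag, a minimal transcription sketch (the audio path is a placeholder):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ryanhan813/whisper-large-v3-turbo-korean-lqa-number-data-5500",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```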
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LA1512/whisper_tiny_add_data | LA1512 | 2025-03-06T10:50:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-03-06T10:49:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/chatglm3-6b-128k-i1-GGUF | mradermacher | 2025-03-06T10:49:50Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"glm",
"chatglm",
"thudm",
"zh",
"en",
"base_model:THUDM/chatglm3-6b-128k",
"base_model:quantized:THUDM/chatglm3-6b-128k",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-03-06T10:18:23Z | ---
base_model: THUDM/chatglm3-6b-128k
language:
- zh
- en
library_name: transformers
quantized_by: mradermacher
tags:
- glm
- chatglm
- thudm
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/THUDM/chatglm3-6b-128k
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/chatglm3-6b-128k-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
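As a quick, non-authoritative sketch (the file name and context length below are placeholders — substitute any quant from the table), a single-file GGUF can be loaded with `llama-cpp-python`:

```python
# Sketch only: load a downloaded quant with llama-cpp-python.
# File name and n_ctx are illustrative, not prescriptive.
from llama_cpp import Llama

llm = Llama(model_path="chatglm3-6b-128k.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain what a GGUF file is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```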
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF/resolve/main/chatglm3-6b-128k.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF/resolve/main/chatglm3-6b-128k.i1-IQ1_M.gguf) | i1-IQ1_M | 2.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF/resolve/main/chatglm3-6b-128k.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF/resolve/main/chatglm3-6b-128k.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF/resolve/main/chatglm3-6b-128k.i1-IQ2_S.gguf) | i1-IQ2_S | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF/resolve/main/chatglm3-6b-128k.i1-IQ2_M.gguf) | i1-IQ2_M | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF/resolve/main/chatglm3-6b-128k.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF/resolve/main/chatglm3-6b-128k.i1-Q2_K.gguf) | i1-Q2_K | 2.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF/resolve/main/chatglm3-6b-128k.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF/resolve/main/chatglm3-6b-128k.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF/resolve/main/chatglm3-6b-128k.i1-IQ3_S.gguf) | i1-IQ3_S | 3.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF/resolve/main/chatglm3-6b-128k.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF/resolve/main/chatglm3-6b-128k.i1-IQ3_M.gguf) | i1-IQ3_M | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF/resolve/main/chatglm3-6b-128k.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF/resolve/main/chatglm3-6b-128k.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF/resolve/main/chatglm3-6b-128k.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF/resolve/main/chatglm3-6b-128k.i1-IQ4_NL.gguf) | i1-IQ4_NL | 3.7 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF/resolve/main/chatglm3-6b-128k.i1-Q4_0.gguf) | i1-Q4_0 | 3.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF/resolve/main/chatglm3-6b-128k.i1-Q4_K_S.gguf) | i1-Q4_K_S | 3.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF/resolve/main/chatglm3-6b-128k.i1-Q4_1.gguf) | i1-Q4_1 | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF/resolve/main/chatglm3-6b-128k.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF/resolve/main/chatglm3-6b-128k.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF/resolve/main/chatglm3-6b-128k.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF/resolve/main/chatglm3-6b-128k.i1-Q6_K.gguf) | i1-Q6_K | 5.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
aimakingg/brand-tabiat | aimakingg | 2025-03-06T10:48:48Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-03-06T10:17:03Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TABIATT12
---
# Brand Tabiat
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TABIATT12` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('aimakingg/brand-tabiat', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
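Since the LoRA is keyed to the trigger word, prompts normally include it; continuing the snippet above with an illustrative prompt:

```python
# Continues the snippet above; prompt text is illustrative.
image = pipeline("TABIATT12, product photo on a wooden table").images[0]
```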
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mradermacher/jais-family-1p3b-i1-GGUF | mradermacher | 2025-03-06T10:46:25Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"Arabic",
"English",
"LLM",
"Decoder",
"causal-lm",
"jais-family",
"ar",
"en",
"base_model:inceptionai/jais-family-1p3b",
"base_model:quantized:inceptionai/jais-family-1p3b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-03-06T10:16:49Z | ---
base_model: inceptionai/jais-family-1p3b
language:
- ar
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- Arabic
- English
- LLM
- Decoder
- causal-lm
- jais-family
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/inceptionai/jais-family-1p3b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/jais-family-1p3b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
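For multi-part quants specifically, the parts just need to be joined in order before loading. A minimal sketch with assumed file names (the quants in this repo are single-file, so this is illustrative only):

```python
# Sketch: join split GGUF parts into one file before loading.
# Part names below are assumptions for illustration.
import shutil

parts = ["model.i1-Q6_K.gguf.part1of2", "model.i1-Q6_K.gguf.part2of2"]
with open("model.i1-Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, merged)
```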
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF/resolve/main/jais-family-1p3b.i1-IQ1_S.gguf) | i1-IQ1_S | 0.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF/resolve/main/jais-family-1p3b.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF/resolve/main/jais-family-1p3b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF/resolve/main/jais-family-1p3b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF/resolve/main/jais-family-1p3b.i1-IQ2_S.gguf) | i1-IQ2_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF/resolve/main/jais-family-1p3b.i1-IQ2_M.gguf) | i1-IQ2_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF/resolve/main/jais-family-1p3b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF/resolve/main/jais-family-1p3b.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF/resolve/main/jais-family-1p3b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF/resolve/main/jais-family-1p3b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF/resolve/main/jais-family-1p3b.i1-IQ3_S.gguf) | i1-IQ3_S | 0.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF/resolve/main/jais-family-1p3b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF/resolve/main/jais-family-1p3b.i1-IQ3_M.gguf) | i1-IQ3_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF/resolve/main/jais-family-1p3b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF/resolve/main/jais-family-1p3b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF/resolve/main/jais-family-1p3b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF/resolve/main/jais-family-1p3b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF/resolve/main/jais-family-1p3b.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF/resolve/main/jais-family-1p3b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF/resolve/main/jais-family-1p3b.i1-Q4_1.gguf) | i1-Q4_1 | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF/resolve/main/jais-family-1p3b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF/resolve/main/jais-family-1p3b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF/resolve/main/jais-family-1p3b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF/resolve/main/jais-family-1p3b.i1-Q6_K.gguf) | i1-Q6_K | 1.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/chatglm3-6b-128k-GGUF | mradermacher | 2025-03-06T10:46:25Z | 202 | 0 | transformers | [
"transformers",
"gguf",
"glm",
"chatglm",
"thudm",
"zh",
"en",
"base_model:THUDM/chatglm3-6b-128k",
"base_model:quantized:THUDM/chatglm3-6b-128k",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-06T05:31:07Z | ---
base_model: THUDM/chatglm3-6b-128k
language:
- zh
- en
library_name: transformers
quantized_by: mradermacher
tags:
- glm
- chatglm
- thudm
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/THUDM/chatglm3-6b-128k
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/chatglm3-6b-128k-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
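As an illustrative sketch only (quant file name assumed, and chat formatting depends on the template embedded in the GGUF), `llama-cpp-python` also exposes a chat-style API:

```python
# Sketch: chat-style use via llama-cpp-python; file name is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="chatglm3-6b-128k.Q4_K_M.gguf", n_ctx=4096)
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize GGUF in one sentence."}]
)
print(resp["choices"][0]["message"]["content"])
```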
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-GGUF/resolve/main/chatglm3-6b-128k.Q2_K.gguf) | Q2_K | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-GGUF/resolve/main/chatglm3-6b-128k.Q3_K_S.gguf) | Q3_K_S | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-GGUF/resolve/main/chatglm3-6b-128k.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-GGUF/resolve/main/chatglm3-6b-128k.IQ4_XS.gguf) | IQ4_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-GGUF/resolve/main/chatglm3-6b-128k.Q3_K_L.gguf) | Q3_K_L | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-GGUF/resolve/main/chatglm3-6b-128k.Q4_K_S.gguf) | Q4_K_S | 3.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-GGUF/resolve/main/chatglm3-6b-128k.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-GGUF/resolve/main/chatglm3-6b-128k.Q5_K_S.gguf) | Q5_K_S | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-GGUF/resolve/main/chatglm3-6b-128k.Q5_K_M.gguf) | Q5_K_M | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-GGUF/resolve/main/chatglm3-6b-128k.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-GGUF/resolve/main/chatglm3-6b-128k.Q8_0.gguf) | Q8_0 | 6.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/chatglm3-6b-128k-GGUF/resolve/main/chatglm3-6b-128k.f16.gguf) | f16 | 12.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Hachipo/Llama3-8B_MIFT-En_opencoder-edu_CoTRFT_1000 | Hachipo | 2025-03-06T10:46:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-06T10:42:38Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jiazhengli/rolemrc-sft | jiazhengli | 2025-03-06T10:45:21Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-03-06T09:48:27Z | |
JeffreyWong/roberta-base-relu-mrpc | JeffreyWong | 2025-03-06T10:44:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:JeremiahZ/roberta-base-mrpc",
"base_model:finetune:JeremiahZ/roberta-base-mrpc",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-06T10:13:55Z | ---
library_name: transformers
language:
- en
license: mit
base_model: JeremiahZ/roberta-base-mrpc
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: roberta-base-relu-mrpc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-relu-mrpc
This model is a fine-tuned version of [JeremiahZ/roberta-base-mrpc](https://huggingface.co/JeremiahZ/roberta-base-mrpc) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.8321
- eval_model_preparation_time: 0.0023
- eval_accuracy: 0.8824
- eval_f1: 0.9161
- eval_combined_score: 0.8992
- eval_runtime: 3.9409
- eval_samples_per_second: 103.529
- eval_steps_per_second: 25.882
- step: 0
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-5, 2e-5, 3e-5
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- num_epochs: 10
The best model was selected based on the highest accuracy, which is the key evaluation metric for this task.
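The card does not include usage code; as a hedged sketch, MRPC-style paraphrase detection takes a sentence pair:

```python
# Sketch: paraphrase classification with the text-classification pipeline.
from transformers import pipeline

clf = pipeline("text-classification", model="JeffreyWong/roberta-base-relu-mrpc")
# MRPC is a sentence-pair task: pass text and text_pair together.
print(clf({"text": "A man is playing a guitar.",
           "text_pair": "A person plays a guitar."}))
```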
### Framework versions
- Transformers 4.50.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
error577/8258ff8f-2881-4270-8f2c-ea10e5121124 | error577 | 2025-03-06T10:42:22Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:dltjdgh0928/test_instruction",
"base_model:adapter:dltjdgh0928/test_instruction",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-06T07:01:19Z | ---
library_name: peft
license: apache-2.0
base_model: dltjdgh0928/test_instruction
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8258ff8f-2881-4270-8f2c-ea10e5121124
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: qlora
auto_resume_from_checkpoints: true
base_model: dltjdgh0928/test_instruction
bf16: auto
chat_template: llama3
dataset_prepared_path: null
dataset_processes: 6
datasets:
- data_files:
- aa50233b0137497d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/aa50233b0137497d_train_data.json
type:
field_instruction: content
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 200
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: error577/8258ff8f-2881-4270-8f2c-ea10e5121124
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: null
micro_batch_size: 3
mlflow_experiment_name: /tmp/aa50233b0137497d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch_4bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 200
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0005
wandb_entity: null
wandb_mode: online
wandb_name: 0c788dab-2672-467c-950d-afa94bc890f0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0c788dab-2672-467c-950d-afa94bc890f0
warmup_steps: 30
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8258ff8f-2881-4270-8f2c-ea10e5121124
This model is a fine-tuned version of [dltjdgh0928/test_instruction](https://huggingface.co/dltjdgh0928/test_instruction) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3624
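As a usage sketch (not part of the original card), the adapter can be attached to the base model with PEFT; the repo ids below come from the config above:

```python
# Sketch: load the base model and attach this QLoRA adapter with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "dltjdgh0928/test_instruction", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "error577/8258ff8f-2881-4270-8f2c-ea10e5121124")
tokenizer = AutoTokenizer.from_pretrained("dltjdgh0928/test_instruction")
```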
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 12
- optimizer: AdamW (OptimizerNames.ADAMW_TORCH_4BIT) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 8.6902 | 0.0000 | 1 | 2.5815 |
| 6.9058 | 0.0048 | 200 | 1.3610 |
| 5.3851 | 0.0096 | 400 | 1.3072 |
| 4.642 | 0.0145 | 600 | 1.3468 |
| 4.8068 | 0.0193 | 800 | 1.3304 |
| 6.5523 | 0.0241 | 1000 | 1.3624 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
visdata/wld16 | visdata | 2025-03-06T10:40:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-06T10:35:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Nerva1228/guoxiaoni | Nerva1228 | 2025-03-06T10:39:38Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-03-06T10:39:36Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: guoxiaoni
---
# Guoxiaoni
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `guoxiaoni` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/guoxiaoni', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
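As with other trigger-word LoRAs, the prompt should contain the token; continuing the snippet above (prompt text is illustrative):

```python
# Continues the snippet above; prompt text is illustrative.
image = pipeline("guoxiaoni, portrait photo, soft natural light").images[0]
```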
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
binbin123/whisper-small-dj-checkpoint-500-vscode | binbin123 | 2025-03-06T10:39:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-06T10:39:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
swardiantara/vector-ordinal-embedding | swardiantara | 2025-03-06T10:38:49Z | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-03-04T14:51:11Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# vector-euclidean
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('swardiantara/vector-ordinal-embedding')  # Hub repo id
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=vector-euclidean)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3415 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`utils.losses.OrdinalContrastiveLoss` with parameters:
```
{'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1024,
"weight_decay": 0.01
}
```
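`utils.losses.OrdinalContrastiveLoss` is a custom module that is not published with this card; the following is only a hypothetical reconstruction, assuming it generalizes the standard margin-based contrastive loss by scaling the margin with the ordinal distance between labels:

```python
# Hypothetical sketch only — the real utils.losses.OrdinalContrastiveLoss
# is not shown in this card; the names and margin-scaling rule are assumptions.
import torch
import torch.nn.functional as F

def ordinal_contrastive_loss(emb_a, emb_b, label_dist, margin=0.5):
    # Cosine distance, matching SiameseDistanceMetric.COSINE_DISTANCE above.
    dist = 1 - F.cosine_similarity(emb_a, emb_b)
    same = (label_dist == 0).float()
    # Pull same-rank pairs together; push different ranks apart by a
    # margin that grows with their ordinal distance.
    loss = same * dist.pow(2) + (1 - same) * F.relu(label_dist * margin - dist).pow(2)
    return loss.mean()
```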
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
HueyWoo/l-3.1-8B-ko-gguf | HueyWoo | 2025-03-06T10:37:52Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:sh2orc/Llama-3.1-Korean-8B-Instruct",
"base_model:quantized:sh2orc/Llama-3.1-Korean-8B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-06T10:28:59Z | ---
base_model: sh2orc/Llama-3.1-Korean-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** HueyWoo
- **License:** apache-2.0
- **Finetuned from model :** sh2orc/Llama-3.1-Korean-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mlfoundations-dev/instruction_filtering_scale_up_code_base_embedding_filter_mean_per_domain_8K | mlfoundations-dev | 2025-03-06T10:37:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-06T03:58:24Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: instruction_filtering_scale_up_code_base_embedding_filter_mean_per_domain_8K
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# instruction_filtering_scale_up_code_base_embedding_filter_mean_per_domain_8K
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/instruction_filtering_scale_up_code_base_embedding_filter_mean_per_domain_8K dataset.
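The card omits usage code; a minimal inference sketch, assuming the standard Qwen2.5 chat template shipped with the tokenizer:

```python
# Sketch: standard transformers inference for this Qwen2.5-based fine-tune.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "mlfoundations-dev/instruction_filtering_scale_up_code_base_embedding_filter_mean_per_domain_8K"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "Write a function that reverses a string."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tok.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```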
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 12
- total_train_batch_size: 96
- total_eval_batch_size: 64
- optimizer: AdamW (OptimizerNames.ADAMW_TORCH) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
HackerCIS/Pong_BrainAI_EXAONE-3.5-7.8B-Instruct | HackerCIS | 2025-03-06T10:36:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-06T03:46:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
visdata/wld12 | visdata | 2025-03-06T10:36:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-06T10:31:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mmaluchnick/britney-spears-bomt-era-flux-model | mmaluchnick | 2025-03-06T10:36:21Z | 23 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-02-26T19:44:13Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: x-25VOcFF630-V_A7_tET_957cbe93fae44f0d93938104dc1d7f31.png
- text: '-'
output:
url: bomt4.png
- text: '-'
output:
url: bomt5.png
- text: '-'
output:
url: bomt6.png
- text: '-'
output:
url: bomt2.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: bomtera
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Britney Spears "...Baby One More Time" Era Flux Model
<Gallery />
## Model description
Britney Jean Spears, born December 2, 1981, in McComb, MS, is an American recording artist, actress, author, and businesswoman. Oft referred to as the “Princess of Pop,” she is credited with the revival of pop music during the late 1990s and early 2000s, and is recognized as an icon. Spears has sold an estimated 150 million records worldwide, making her one of the world's best-selling music artists. She ranks as the best-selling female albums artist of the 2000s, the eighth-biggest artist overall of the 2000s, and the fourth best-selling female albums artist and tenth best-selling digital artist in history. Spears has earned countless awards and accolades, including a Grammy Award, 15 Guinness World Records, Billboard’s Millennium Award, GLAAD’s Vanguard Award, the inaugural Radio Disney Icon Award, MTV’s Michael Jackson Video Vanguard Award, and a star on the Hollywood Walk of Fame. In 2020, Rolling Stone named her song “…Baby One More Time” the best debut single of all time. After Spears won a readers' poll, Time selected her as one of its 100 Most Influential People in 2021.
Spears made her local stage debut at age 5, singing “What Child Is This?” at her kindergarten graduation. Throughout her childhood, Spears took voice, dance, and gymnastic lessons, while competing in pageants and talent shows. For a short time, she trained at a camp run by famed Olympics gymnastics coach Bela Karolyi. In 1993, alongside other future stars Christina Aguilera, Justin Timberlake, and Ryan Gosling, Spears was cast on Disney's “The New Mickey Mouse Club.” She remained on the series until its cancellation two years later. Spears signed a record deal with Jive Records in 1997, when she was 15. Her first single, “…Baby One More Time,” was released in October 1998. Buoyed by its controversial music video, the song reached No. 1 in 23 countries, propelling Spears to international superstardom and ushering in a new era of pop music. Spears’ debut album, also titled “…Baby One More Time,” arrived in January 1999. It debuted at No. 1 in the US, making Spears the first artist in history to have both the No. 1 song and album in the same week. In total, "...Baby One More Time" sold over 25 million copies worldwide.
Spears' sophomore album, "Oops!... I Did It Again" (2000), sold 1.3 million copies in its first week alone and held the record for the fastest-selling album by a female artist in the US for 15 years. Spears adopted a more mature sound and style for her third and fourth albums, 2001's "Britney" and 2003's "In the Zone." Despite backlash over Spears’ increasingly provocative image, both albums sold over 10 million copies worldwide.
Spears made her big-screen debut in the motion picture “Crossroads” (2002), written by Shonda Rhimes and co-starring Dan Aykroyd, Kim Cattrall, Zoe Saldana, and Taryn Manning. She has also guest-starred on “Glee,” “How I Met Your Mother,” “Will & Grace,” “Sabrina, the Teenage Witch,” and “Jane the Virgin,” and has twice hosted “Saturday Night Live” and appeared as musical guest three times.
In 2004, Spears partnered with Elizabeth Arden to launch her first perfume, Curious. Spears currently has over 30 fragrances to her name, available in 85 countries, with sales exceeding $1.5 billion.
Spears served as executive producer of her fifth album, “Blackout” (2007). Though it initially received lukewarm reviews, “Blackout” has since been recognized as one of the most influential albums of its time, and is widely considered Spears' best work. In 2008, after a bout of personal struggles, Spears was placed in a conservatorship that stripped her of all personal autonomy and put her estranged father in control of her person and estate. (The conservatorship remained in place until November 2021. Spears has described the abuse, isolation, and forced labor that she endured while under her father’s control.) Soon after the conservatorship was implemented, Spears returned to work, releasing the chart-topping albums “Circus” (2008) and “Femme Fatale” (2011), both of which were supported by extensive worldwide concert tours.
In 2012, Spears appeared as a judge on "X-Factor USA," becoming, at the time, the highest-paid reality TV judge in history. That same year, Spears was featured on will.i.am's “Scream & Shout,” which peaked at No. 3 on the Hot 100 and was the first No. 1 song on Billboard's new Dance/Electronic Songs chart. will.i.am later executive-produced Spears’ eighth album, “Britney Jean” (2013). In December 2013, Spears began a Las Vegas concert residency, “Britney: Piece of Me,” at Planet Hollywood Resort & Casino. The show was initially scheduled to run for two years, but was extended several times due to its enduring popularity. It ultimately concluded in December 2017. Spears and her residency revitalized the Vegas Strip, and the show won numerous awards during its run, including Best Show in Vegas and Best Bachelorette Show in Vegas. In 2015, Spears released the single “Pretty Girls” with Iggy Azalea, and contributed vocals to Giorgio Moroder's “Tom’s Diner.” Spears’ ninth album, “Glory,” arrived in August 2016, preceded by the Top 20 hit "Make Me..." featuring G-Eazy. Spears later took her Vegas show on the road throughout 2017 and 2018, with dates in some countries that she had never toured previously. "Glory" was re-released in 2020 with updated cover art and additional songs following a successful fan campaign to push “Mood Ring” - originally a Japan-only bonus track - to No. 1 on iTunes.
Spears teamed up with Elton John in 2022 to release the single "Hold Me Closer," which debuted at No. 6 on the Hot 100 and became Spears’ highest-charting single in a decade. Also in 2022, publishing house Simon & Schuster signed Spears to a book deal worth a staggering $15 million. Spears’ hotly-anticipated memoir, “The Woman in Me,” hit shelves in October 2023. In its first week, it sold 1.1 million copies in the US and 2.4 million copies worldwide, immediately becoming a New York Times #1 bestseller, as well as the fastest-selling title in Simon & Schuster’s history. A film adaptation of Spears’ memoir, to be helmed by "Wicked" director Jon Chu, was announced in 2024.
After reports surfaced that Spears was working on a new album, she clarified via Instagram that she has no plans to resume her own singing career, though she will continue ghostwriting songs for other artists.
## Trigger words
You should use `bomtera` to trigger the image generation.
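For quick experimentation, a minimal `diffusers` sketch along these lines should work (the prompt, precision, and sampler settings here are illustrative assumptions, not settings from the model author):

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model, then attach this LoRA on top of it.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("mmaluchnick/britney-spears-bomt-era-flux-model")
pipe.to("cuda")

# The trigger word `bomtera` activates the "...Baby One More Time" era style.
image = pipe(
    "bomtera, a late-1990s pop star performing on stage",  # illustrative prompt
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("bomtera.png")
```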
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/mmaluchnick/britney-spears-bomt-era-flux-model/tree/main) them in the Files & versions tab. |
visdata/wld19 | visdata | 2025-03-06T10:33:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-06T10:27:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TFOCUS/bruno_tester_14 | TFOCUS | 2025-03-06T10:32:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-06T10:29:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sheepwjy/lora_model | sheepwjy | 2025-03-06T10:32:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-06T10:28:18Z | ---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sheepwjy
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
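The repository appears to hold LoRA adapter weights rather than a merged checkpoint, so one way to run inference is to attach the adapter to the 4-bit base model with PEFT. A minimal sketch, assuming the adapter files load with a standard `peft` call:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "unsloth/qwen2.5-7b-unsloth-bnb-4bit"  # 4-bit base the adapter was trained from
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = PeftModel.from_pretrained(model, "sheepwjy/lora_model")  # attach the LoRA adapter

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```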
|
nice2mitya/a_816439710 | nice2mitya | 2025-03-06T10:31:17Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-03-06T10:04:30Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
XeTute/SaplingDream_V1-0.5B | XeTute | 2025-03-06T10:29:45Z | 61 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"reasoning",
"conversational",
"thinking",
"tiny",
"small",
"dataset:open-thoughts/OpenThoughts-114k",
"dataset:prithivMLmods/Deepthink-Reasoning-Ins",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-20T20:34:18Z | ---
license: apache-2.0
datasets:
- open-thoughts/OpenThoughts-114k
- prithivMLmods/Deepthink-Reasoning-Ins
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
tags:
- reasoning
- conversational
- thinking
- tiny
- small
library_name: transformers
---
<div style="max-width: 800px; margin: 2rem auto; padding: 2rem; background: #0a0a0a; font-family: 'Courier New', monospace; color: #00ff88; border: 3px dashed #00ffff; box-shadow: 0 0 15px rgba(0, 255, 136, 0.3);">
<h1 style="font-size: 2.5rem; color: #ff00ff; text-shadow: 0 0 10px #ff00ff; border-bottom: 3px solid #00ffff; padding-bottom: 0.5rem; text-transform: uppercase;">Sapling Dream V1</h1>
<div style="margin: 2rem 0; padding: 1.5rem; border: 2px solid #00ff88; background: rgba(0, 0, 0, 0.3); max-height: fit-content;">
<p style="line-height: 1.6;">
Introducing <strong style="color: #00ffff; text-shadow: 0 0 8px #00ffff;">SaplingDream</strong>,
a compact GPT model with 0.5 billion parameters, based on the
<a href="https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct" style="color: #ff00ff; text-decoration: underline; text-decoration-color: #00ffff;">Qwen/Qwen2.5-0.5B-Instruct</a>
architecture. This model has been fine-tuned on an RTX 4060 8GB for a bit over two days on ~0.3B tokens...
</p>
</div>
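The card itself does not include a loading snippet, so here is a minimal, untested sketch using the standard `transformers` text-generation pipeline (the chat-style input and generation length are assumptions based on the Qwen2.5-0.5B-Instruct base):

```python
from transformers import pipeline

# The model presumably follows the chat format of its Qwen2.5-0.5B-Instruct base.
generator = pipeline("text-generation", model="XeTute/SaplingDream_V1-0.5B")
messages = [{"role": "user", "content": "Reason step by step: what is 17 * 24?"}]
print(generator(messages, max_new_tokens=256)[0]["generated_text"])
```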
<div style="margin: 2rem 0; padding: 1.5rem; border-left: 4px solid #ff00ff; max-height: fit-content;">
<h2 style="color: #00ffff; font-size: 1.8rem; margin-bottom: 1rem;">Datasets & Resources</h2>
<ul style="list-style: none; padding-left: 1rem;">
<li style="margin-bottom: 0.8rem;">🔗 <a href="https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k" style="color: #00ff88; text-decoration: none;">OpenThoughts-114k Dataset</a></li>
<li style="margin-bottom: 0.8rem;">⚙️ <a href="https://huggingface.co/mradermacher/SaplingDream_V1-0.5B-GGUF" style="color: #00ff88; text-decoration: none;">GGUF Version</a></li>
</ul>
</div>
<div style="text-align: center; margin: 2rem 0; border: 2px solid #00ffff; padding: 1rem; max-height: fit-content;">
<p>Evaluation Loss Chart</p>
<img src="https://huggingface.co/XeTute/SaplingDream_V1-0.5B/resolve/main/training_eval_loss.png"
alt="Evaluation Loss Chart"
style="max-width: 100%; border: 1px solid #ff00ff; box-shadow: 0 0 15px rgba(255, 0, 255, 0.4);">
</div>
<div style="background: linear-gradient(45deg, #002200, #000033); padding: 2rem; text-align: center; border: 1px solid #00ff88; max-height: fit-content;">
<h3 style="color: #ff00ff; font-size: 1.5rem; margin-bottom: 1.5rem;">Our Apps & Socials</h3>
<p style="word-spacing: 2rem;">
<a href="https://xetute.com/" style="color: #00ffff; text-decoration: underline;"><strong>Chat Assistant</strong></a> |
<a href="https://ko-fi.com/XeTute" style="color: #00ffff; text-decoration: underline;">Support Us</a> |
<a href="https://github.com/XeTute" style="color: #00ffff; text-decoration: underline;">GitHub</a>
</p>
</div>
<footer style="margin-top: 3rem; text-align: center; color: #ff5500; max-height: fit-content;">
<p style="font-size: 1.2rem; text-shadow: 0 0 8px #ff5500;">
Long live the Islamic Republic of Pakistan; Glory to the Islamic Republic of Pakistan 🇵🇰
</p>
<img src="https://upload.wikimedia.org/wikipedia/commons/3/32/Flag_of_Pakistan.svg"
alt="Pakistan Flag"
style="margin-top: 1rem; border: 2px solid #00ffff; padding: 3px;">
</footer>
</div> |
Lettria/just_test-trial-3 | Lettria | 2025-03-06T10:28:31Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"tensorboard",
"onnx",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:2467",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:intfloat/multilingual-e5-base",
"base_model:quantized:intfloat/multilingual-e5-base",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-03-06T10:27:35Z | ---
base_model: intfloat/multilingual-e5-base
library_name: sentence-transformers
metrics:
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- cosine_mcc
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:2467
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 'Date de début: non précisée
Date de fin (clôture): non précisée
Date de début de la future campagne: non précisée'
sentences:
- '''Date de début de la future campagne'':concept|EST|''non précisée'':__inferred__'
- '''rencontres'':événement|impliquent|''professionnels bénévoles'':groupe'
- '''petites entreprises innovantes franciliennes'':bénéficiaire|INCLUT|''Professionnel
- PME < 250'':bénéficiaire'
- source_sentence: 'Date de début: non précisée
Date de fin (clôture): non précisée
Date de début de la future campagne: non précisée'
sentences:
- '''Date de début'':concept|EST|''Lundi 14 Octobre 2024'':date'
- '''communes d''Île-de-France'':entité|BÉNÉFICIAIRE|''aide'':concept'
- '''Date de fin'':concept|EST|''non précisée'':__inferred__'
- source_sentence: 'Date de début: Lundi 7 Octobre 2024, à 00:00:00 (UTC+0200)
Date de fin (clôture): Jeudi 14 Novembre 2024, à 23:59:00 (UTC+0100)
Date de début de la future campagne: non précisée'
sentences:
- '''mesdemarches.iledefrance.fr'':plateforme|ACCEPTE_DEMANDE|''Collectivité ou
institution - Communes de 2000 à 10 000 hab'':organisation'
- '''plateforme des aides régionales'':plateforme|CIBLE|''Collectivité ou institution
- EPT / Métropole du Grand Paris'':organisation'
- '''début'':événement|DUREE|''fin'':événement'
- source_sentence: 'Procédures et démarches: Les demandes d’aide devront être déposées
sur mesdemarches.iledefrance.fr, la plateforme des aides régionales
Bénéficiaires: Professionnel - Culture, Professionnel - Patrimoine, Association
- Fondation, Association - Régie par la loi de 1901, Collectivité ou institution
- Communes de 10 000 à 20 000 hab, Collectivité ou institution - Communes de 2000
à 10 000 hab, Collectivité ou institution - Communes de < 2000 hab, Collectivité
ou institution - Communes de > 20 000 hab, Collectivité ou institution - Département,
Collectivité ou institution - EPCI, Collectivité ou institution - Autre (GIP,
copropriété, EPA...)
Précision sure les bénéficiaires: Collectivités territoriales (communes, départements,
EPCI),Aménageurs mandatés par des collectivités territoriales,Personnes morales
de droit privé ou public ayant au moins 1 an d''existence.'
sentences:
- '''Dossier de candidature'':document|DÉPOSER_SUR|''mesdemarches.iledefrance.fr'':plateforme'
- '''mesdemarches.iledefrance.fr'':plateforme|ACCEPTE_DEMANDE|''Professionnel -
Patrimoine'':bénéficiaire'
- '''Appel à projets'':événement|DESTINÉ_À|''GIP, copropriété, EPA'':bénéficiaire'
- source_sentence: 'Précision sure les bénéficiaires: Communes,Établissements publics
de coopération intercommunale (avec ou sans fiscalité propre),Établissements publics
territoriaux franciliens,Départements,Aménageurs publics et privés (lorsque ces
derniers interviennent à la demande ou pour le compte d''une collectivité précitée).'
sentences:
- '''Date de début'':concept|EST|''non précisée'':__inferred__'
- '''structure accompagnatrice'':organisation|AGRÉÉ_PAR|''État'':organisation'
- '''Aménageurs publics'':entité|INTERVIENT_POUR|''Établissements publics territoriaux
franciliens'':entité'
model-index:
- name: SentenceTransformer based on intfloat/multilingual-e5-base
results:
- task:
type: binary-classification
name: Binary Classification
dataset:
name: BinaryClassifEval
type: BinaryClassifEval
metrics:
- type: cosine_accuracy
value: 0.9666666666666667
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.2860155701637268
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.983050847457627
name: Cosine F1
- type: cosine_f1_threshold
value: 0.2860155701637268
name: Cosine F1 Threshold
- type: cosine_precision
value: 1.0
name: Cosine Precision
- type: cosine_recall
value: 0.9666666666666667
name: Cosine Recall
- type: cosine_ap
value: 1.0
name: Cosine Ap
- type: cosine_mcc
value: 0.0
name: Cosine Mcc
---
# SentenceTransformer based on intfloat/multilingual-e5-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) <!-- at revision 835193815a3936a24a0ee7dc9e3d48c1fbb19c55 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Lettria/just_test-trial-3")
# Run inference
sentences = [
"Précision sure les bénéficiaires: Communes,Établissements publics de coopération intercommunale (avec ou sans fiscalité propre),Établissements publics territoriaux franciliens,Départements,Aménageurs publics et privés (lorsque ces derniers interviennent à la demande ou pour le compte d'une collectivité précitée).",
"'Aménageurs publics':entité|INTERVIENT_POUR|'Établissements publics territoriaux franciliens':entité",
"'structure accompagnatrice':organisation|AGRÉÉ_PAR|'État':organisation",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Binary Classification
* Dataset: `BinaryClassifEval`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:--------------------------|:--------|
| cosine_accuracy | 0.9667 |
| cosine_accuracy_threshold | 0.286 |
| cosine_f1 | 0.9831 |
| cosine_f1_threshold | 0.286 |
| cosine_precision | 1.0 |
| cosine_recall | 0.9667 |
| **cosine_ap** | **1.0** |
| cosine_mcc | 0.0 |
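For reference, these numbers come from an evaluator that can be reconstructed roughly as follows (a sketch; the toy pairs below stand in for the held-out data described under Evaluation Dataset):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("Lettria/just_test-trial-3")

# Toy stand-ins for the evaluation pairs; labels are 0/1 relevance judgements.
sentences1 = ["Date de début: non précisée", "Date de fin (clôture): non précisée"]
sentences2 = ["'Date de début':concept|EST|'non précisée':__inferred__",
              "'rencontres':événement|impliquent|'professionnels bénévoles':groupe"]
labels = [1, 0]

evaluator = BinaryClassificationEvaluator(
    sentences1, sentences2, labels, name="BinaryClassifEval"
)
results = evaluator(model)  # returns a dict of metrics like the table above
print(results["BinaryClassifEval_cosine_ap"])
```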
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 2,467 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------|
| type | string | string | int |
| details | <ul><li>min: 32 tokens</li><li>mean: 217.07 tokens</li><li>max: 383 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 29.37 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>1: 100.00%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>Type de project: L’excès de précipitations tout au long de l’année a conduit à une chute spectaculaire des rendements des céréales d’été et des protéagineux (blé, orge, pois, féverole, etc.) que produisent 90% des agriculteurs d’Île-de-France, historique grenier à blé du pays. Tributaires naturels du fleurissement des cultures, les apiculteurs professionnels de la région ont également souffert de ces dérèglements climatiques.La Région accompagne les exploitations concernées en leur apportant une aide exceptionnelle.</code> | <code>'excès de précipitations':phénomène|DIMINUE|'rendements des protéagineux':concept</code> | <code>1</code> |
| <code>Type de project: Dans le cadre de sa stratégie « Impact 2028 », la Région s’engage dans la défense de la souveraineté industrielle en renforçant son soutien à une industrie circulaire et décarbonée, porteuse d’innovations et créatrice d’emplois. PM'up Jeunes pousses industrielles soutient les projets d’implantation d’une première usine tournée vers la décarbonation, l’efficacité énergétique et la circularité des processus de production. Ces projets peuvent prendre l'une de ces formes : Une première unité de production industrielle, après une phase de prototypage,Une ligne pilote de production industrielle, en interne ou chez un tiers situé en Île-de-France, à condition que sa production soit destinée à de premières commercialisations,La transformation d’une unité de production pilote à une unité de production industrielle</code> | <code>'Région Île-de-France':organisation|soutient|'industrie décarbonée':concept</code> | <code>1</code> |
| <code>Procédures et démarches: Le dépôt des demandes de subvention se fait en ligne sur la plateforme régionale mesdemarches.iledefrance.fr : Session de dépôt unique pour les nouvelles demandes : du 30 septembre au 4 novembre 2024 (11 heures) pour des festivals qui se déroulent entre le 1er mars 2025 et le 28 février 2026 (vote à la CP de mars 2025). Pour les demandes de renouvellement, un mail est envoyé aux structures concernées par le service du Spectacle vivant en amont de chaque session de dépôt.<br>Bénéficiaires: Professionnel - Culture, Association - Fondation, Association - Régie par la loi de 1901, Association - ONG, Collectivité ou institution - Communes de 10 000 à 20 000 hab, Collectivité ou institution - Autre (GIP, copropriété, EPA...), Collectivité ou institution - Communes de 2000 à 10 000 hab, Collectivité ou institution - Communes de < 2000 hab, Collectivité ou institution - Communes de > 20 000 hab, Collectivité ou institution - Département, Collectivité ou institution - EPC...</code> | <code>'Collectivité ou institution - EPCI':bénéficiaire|PEUT_BÉNÉFICIER|'demandes de subvention':procédure</code> | <code>1</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
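In code, this loss configuration corresponds roughly to the following sketch:

```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("intfloat/multilingual-e5-base")
# scale=20.0 and cosine similarity (the default similarity_fct) match the JSON above
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)
```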
### Evaluation Dataset
#### json
* Dataset: json
* Size: 616 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 616 samples:
| | sentence1 | sentence2 | label |
|:--------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------|
| type | string | string | int |
| details | <ul><li>min: 31 tokens</li><li>mean: 131.67 tokens</li><li>max: 330 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 29.63 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>1: 100.00%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------|
| <code>Type de project: Le programme propose des rencontres le samedi après-midi dans une université ou une grande école réputée, entre les professionnels bénévoles et les lycéens et collégiens sous la forme d'atelier thématiques. Ces moments de rencontre touchent à une grande multitude de domaines d’activités. L'objectif est de donner l’opportunité aux jeunes les plus enclavés d’échanger avec des intervenants professionnels aux parcours atypiques et inspirants. Les intervenants suscitent les ambitions et élargissent les perspectives des élèves.</code> | <code>'rencontres':événement|impliquent|'professionnels bénévoles':groupe</code> | <code>1</code> |
| <code>Précision sure les bénéficiaires: Communes,Établissements publics de coopération intercommunale (avec ou sans fiscalité propre),Établissements publics territoriaux franciliens,Départements,Aménageurs publics et privés (lorsque ces derniers interviennent à la demande ou pour le compte d'une collectivité précitée).</code> | <code>'Aménageurs privés':entité|INTERVIENT_POUR|'Départements':entité</code> | <code>1</code> |
| <code>Date de début: non précisée<br>Date de fin (clôture): non précisée<br>Date de début de la future campagne: non précisée</code> | <code>'Date de fin':concept|EST|'non précisée':__inferred__</code> | <code>1</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `gradient_accumulation_steps`: 2
- `learning_rate`: 0.0003113016998934619
- `num_train_epochs`: 10
- `lr_scheduler_type`: cosine
- `warmup_steps`: 274
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `hub_model_id`: Lettria/just_test-trial-3
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 0.0003113016998934619
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 274
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: Lettria/just_test-trial-3
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | BinaryClassifEval_cosine_ap |
|:-------:|:------:|:-------------:|:---------------:|:---------------------------:|
| 1.0 | 4 | - | 0.6784 | 1.0 |
| 2.0 | 8 | - | 0.6264 | 1.0 |
| 3.0 | 12 | - | 0.5119 | 1.0 |
| 4.0 | 16 | - | 0.4031 | 1.0 |
| 5.0 | 20 | - | 0.3903 | 1.0 |
| 6.0 | 24 | - | 0.4197 | 1.0 |
| 7.0 | 28 | - | 0.3981 | 1.0 |
| **8.0** | **32** | **-** | **0.3694** | **1.0** |
| 9.0 | 36 | - | 0.4992 | 1.0 |
| 10.0 | 40 | 0.7366 | 0.3694 | 1.0 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.9
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.3.0
- Accelerate: 1.1.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
KingEmpire/Seraing_4 | KingEmpire | 2025-03-06T10:26:39Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-03-06T09:53:28Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Dannyar608/Text_summarizer | Dannyar608 | 2025-03-06T10:26:37Z | 0 | 0 | transformers | [
"transformers",
"code",
"Summarizer",
"BART",
"Gradio",
"Machine Learning",
"Natural Language Processing (NLP)",
"Deep Learning",
"Interactive Demo",
"Python",
"AI",
"summarization",
"en",
"dataset:abisee/cnn_dailymail",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"endpoints_compatible",
"region:us"
] | summarization | 2025-03-06T10:11:15Z | ---
datasets:
- abisee/cnn_dailymail
language:
- en
metrics:
- rouge
- bleu
- meteor
- ter
base_model:
- facebook/bart-large-cnn
pipeline_tag: summarization
library_name: transformers
tags:
- code
- Summarizer
- BART
- transformers
- Gradio
- Machine Learning
- Natural Language Processing (NLP)
- Deep Learning
- Interactive Demo
- Python
- AI
--- |
KingEmpire/Seraing_6 | KingEmpire | 2025-03-06T10:26:24Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-03-06T09:53:29Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
mradermacher/jais-family-1p3b-GGUF | mradermacher | 2025-03-06T10:25:41Z | 204 | 0 | transformers | [
"transformers",
"gguf",
"Arabic",
"English",
"LLM",
"Decoder",
"causal-lm",
"jais-family",
"ar",
"en",
"base_model:inceptionai/jais-family-1p3b",
"base_model:quantized:inceptionai/jais-family-1p3b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-06T02:02:37Z | ---
base_model: inceptionai/jais-family-1p3b
language:
- ar
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- Arabic
- English
- LLM
- Decoder
- causal-lm
- jais-family
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/inceptionai/jais-family-1p3b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/jais-family-1p3b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
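As a concrete example, a single-file quant from the table below can be fetched and run with llama.cpp's CLI (a sketch; the binary name and flags assume a current llama.cpp build):

```bash
# download one quant, then run it locally
huggingface-cli download mradermacher/jais-family-1p3b-GGUF \
    jais-family-1p3b.Q4_K_M.gguf --local-dir .
./llama-cli -m jais-family-1p3b.Q4_K_M.gguf -p "Hello" -n 128
```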
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-GGUF/resolve/main/jais-family-1p3b.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-GGUF/resolve/main/jais-family-1p3b.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-GGUF/resolve/main/jais-family-1p3b.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-GGUF/resolve/main/jais-family-1p3b.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-GGUF/resolve/main/jais-family-1p3b.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-GGUF/resolve/main/jais-family-1p3b.Q4_K_S.gguf) | Q4_K_S | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-GGUF/resolve/main/jais-family-1p3b.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-GGUF/resolve/main/jais-family-1p3b.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-GGUF/resolve/main/jais-family-1p3b.Q5_K_M.gguf) | Q5_K_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-GGUF/resolve/main/jais-family-1p3b.Q6_K.gguf) | Q6_K | 1.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-GGUF/resolve/main/jais-family-1p3b.Q8_0.gguf) | Q8_0 | 1.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/jais-family-1p3b-GGUF/resolve/main/jais-family-1p3b.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF | mradermacher | 2025-03-06T10:25:40Z | 35 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"dataset:gghfez/long-cot-4k",
"base_model:gghfez/Mistral-Large-2407-LongCoT",
"base_model:quantized:gghfez/Mistral-Large-2407-LongCoT",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-03-05T14:27:45Z | ---
base_model: gghfez/Mistral-Large-2407-LongCoT
datasets:
- gghfez/long-cot-4k
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/gghfez/Mistral-Large-2407-LongCoT
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
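For the split quants listed below, concatenation is a plain byte-wise join, for example (a sketch using one of the two-part files):

```bash
# join the parts into a single GGUF file before loading it
cat Mistral-Large-2407-LongCoT.i1-Q4_K_S.gguf.part1of2 \
    Mistral-Large-2407-LongCoT.i1-Q4_K_S.gguf.part2of2 \
    > Mistral-Large-2407-LongCoT.i1-Q4_K_S.gguf
```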
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-IQ1_S.gguf) | i1-IQ1_S | 26.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-IQ1_M.gguf) | i1-IQ1_M | 28.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 32.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-IQ2_XS.gguf) | i1-IQ2_XS | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-IQ2_S.gguf) | i1-IQ2_S | 38.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-Q2_K_S.gguf) | i1-Q2_K_S | 41.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-IQ2_M.gguf) | i1-IQ2_M | 41.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-Q2_K.gguf) | i1-Q2_K | 45.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 47.1 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-IQ3_XS.gguf.part2of2) | i1-IQ3_XS | 50.2 | |
| [PART 1](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 52.9 | IQ3_XS probably better |
| [PART 1](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 53.1 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 55.4 | |
| [PART 1](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 59.2 | IQ3_S probably better |
| [PART 1](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 64.7 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 65.5 | |
| [PART 1](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 69.4 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 69.7 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 73.3 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-Q4_1.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-Q4_1.gguf.part2of2) | i1-Q4_1 | 76.8 | |
| [PART 1](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 84.5 | |
| [PART 1](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 86.6 | |
| [PART 1](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF/resolve/main/Mistral-Large-2407-LongCoT.i1-Q6_K.gguf.part3of3) | i1-Q6_K | 100.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
jscryptofund/sasha | jscryptofund | 2025-03-06T10:24:04Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-03-06T09:42:02Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
TongZheng1999/gemma-2-9b-it-star-code-v3_reasoning_10-3Rounds-iter-3 | TongZheng1999 | 2025-03-06T10:22:11Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"sft",
"conversational",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-06T09:19:25Z | ---
base_model: google/gemma-2-9b-it
library_name: transformers
model_name: gemma-2-9b-it-star-code-v3_reasoning_10-3Rounds-iter-3
tags:
- generated_from_trainer
- alignment-handbook
- trl
- sft
licence: license
---
# Model Card for gemma-2-9b-it-star-code-v3_reasoning_10-3Rounds-iter-3
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="TongZheng1999/gemma-2-9b-it-star-code-v3_reasoning_10-3Rounds-iter-3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kidzheng/huggingface/runs/fuby9gyt)
This model was trained with SFT.
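As a rough illustration of the setup (the exact training script and data are not published in this card; the dataset below is a placeholder):
```python
from datasets import load_dataset
from trl import SFTTrainer

# Placeholder dataset -- the actual SFT data for this model is not listed here.
train_dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="google/gemma-2-9b-it",  # the base model named above
    train_dataset=train_dataset,
)
trainer.train()
```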
### Framework versions
- TRL: 0.12.0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/jais-adapted-7b-chat-GGUF | mradermacher | 2025-03-06T10:18:27Z | 184 | 0 | transformers | [
"transformers",
"gguf",
"Arabic",
"English",
"LLM",
"Decoder",
"causal-lm",
"jais-family",
"ar",
"en",
"base_model:inceptionai/jais-adapted-7b-chat",
"base_model:quantized:inceptionai/jais-adapted-7b-chat",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-06T01:55:33Z | ---
base_model: inceptionai/jais-adapted-7b-chat
language:
- ar
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- Arabic
- English
- LLM
- Decoder
- causal-lm
- jais-family
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/inceptionai/jais-adapted-7b-chat
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/jais-adapted-7b-chat-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
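For single-file quants like the ones below, one convenient option is `llama-cpp-python`, which can pull a GGUF straight from the Hub (a sketch, not from the original card; pick any file name from the table):
```python
from llama_cpp import Llama

# Downloads the requested quant from the Hub on first use.
llm = Llama.from_pretrained(
    repo_id="mradermacher/jais-adapted-7b-chat-GGUF",
    filename="jais-adapted-7b-chat.Q4_K_M.gguf",
)
out = llm("Translate to Arabic: Good morning!", max_tokens=64)
print(out["choices"][0]["text"])
```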
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/jais-adapted-7b-chat-GGUF/resolve/main/jais-adapted-7b-chat.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/jais-adapted-7b-chat-GGUF/resolve/main/jais-adapted-7b-chat.Q3_K_S.gguf) | Q3_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/jais-adapted-7b-chat-GGUF/resolve/main/jais-adapted-7b-chat.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/jais-adapted-7b-chat-GGUF/resolve/main/jais-adapted-7b-chat.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/jais-adapted-7b-chat-GGUF/resolve/main/jais-adapted-7b-chat.IQ4_XS.gguf) | IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/jais-adapted-7b-chat-GGUF/resolve/main/jais-adapted-7b-chat.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/jais-adapted-7b-chat-GGUF/resolve/main/jais-adapted-7b-chat.Q4_K_M.gguf) | Q4_K_M | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/jais-adapted-7b-chat-GGUF/resolve/main/jais-adapted-7b-chat.Q5_K_S.gguf) | Q5_K_S | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/jais-adapted-7b-chat-GGUF/resolve/main/jais-adapted-7b-chat.Q5_K_M.gguf) | Q5_K_M | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/jais-adapted-7b-chat-GGUF/resolve/main/jais-adapted-7b-chat.Q6_K.gguf) | Q6_K | 5.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/jais-adapted-7b-chat-GGUF/resolve/main/jais-adapted-7b-chat.Q8_0.gguf) | Q8_0 | 7.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/jais-adapted-7b-chat-GGUF/resolve/main/jais-adapted-7b-chat.f16.gguf) | f16 | 14.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Hachipo/Llama3-8B_MIFT-En_opencoder-edu_EnTrans_1000 | Hachipo | 2025-03-06T10:17:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-06T10:13:32Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
texanrangee/446d83b8-576b-41f8-aacd-6fabdf825e9a | texanrangee | 2025-03-06T10:15:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-06T04:29:58Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
visdata/wld14 | visdata | 2025-03-06T10:15:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-06T10:09:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
csikasote/mms-300m-librispeech-adapter-model | csikasote | 2025-03-06T10:13:58Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"libri10h",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-300m",
"base_model:finetune:facebook/mms-300m",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-02-22T16:40:25Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-300m
tags:
- automatic-speech-recognition
- libri10h
- mms
- generated_from_trainer
metrics:
- wer
model-index:
- name: mms-300m-librispeech-adapter-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-300m-librispeech-adapter-model
This model is a fine-tuned version of [facebook/mms-300m](https://huggingface.co/facebook/mms-300m) on the LIBRI10H - ENG dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8895
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 2.0
- mixed_precision_training: Native AMP
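For reference, these settings map onto `transformers` `TrainingArguments` roughly as follows (an illustrative sketch, not the original training script):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mms-300m-librispeech-adapter-model",
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,  # effective train batch size of 8
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=2.0,
    fp16=True,  # mixed precision (native AMP)
    seed=42,
)
```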
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:---:|
| 6.0414 | 0.2899 | 100 | 2.9197 | 1.0 |
| 2.8561 | 0.5797 | 200 | 2.8877 | 1.0 |
| 2.8552 | 0.8696 | 300 | 2.8849 | 1.0 |
| 2.842 | 1.1594 | 400 | 2.9165 | 1.0 |
| 2.8444 | 1.4493 | 500 | 2.8822 | 1.0 |
| 2.8381 | 1.7391 | 600 | 2.8861 | 1.0 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
ig1/r1-1776-AWQ | ig1 | 2025-03-06T10:13:29Z | 0 | 0 | null | [
"safetensors",
"deepseek_v3",
"custom_code",
"base_model:perplexity-ai/r1-1776",
"base_model:quantized:perplexity-ai/r1-1776",
"license:mit",
"4-bit",
"awq",
"region:us"
] | null | 2025-03-06T09:34:23Z | ---
license: mit
base_model:
- perplexity-ai/r1-1776
---
Quantized with `AutoAWQ` `v0.2.8` and `transformers` `v4.49.0`.
Example run:
```bash
docker run --rm --runtime nvidia --gpus 'all' --ipc=host -e VLLM_WORKER_MULTIPROC_METHOD=spawn -e 'HF_TOKEN' -v '/root/.cache/huggingface:/root/.cache/huggingface' -p 127.0.0.1:8000:8000 "vllm/vllm-openai:v0.7.3" --tensor-parallel-size 4 --enable-chunked-prefill=False --enable-reasoning --reasoning-parser deepseek_r1 --model 'ig1/r1-1776-AWQ' --trust-remote-code --dtype half --served-model-name "R1 1776" --max-model-len 65536 --override-generation-config '{"temperature":0.6,"top_p":0.95}'
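# Hypothetical smoke test (not from the original card): once the server is up,
# query the OpenAI-compatible endpoint it exposes.
curl -s http://127.0.0.1:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"model": "R1 1776", "messages": [{"role": "user", "content": "Hello"}]}'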
``` |
Danaasa/llama-prestigeAI | Danaasa | 2025-03-06T10:13:00Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-06T10:13:00Z | ---
license: apache-2.0
---
|
original-videostv/Leaked-Sapna-Shah.Video.Viral.Original.Leaked.Video.Trending.Now | original-videostv | 2025-03-06T10:12:55Z | 0 | 0 | null | [
"region:us"
] | null | 2025-03-06T10:03:54Z | 📺📱👉◄◄🔴 https://tinyurl.com/2v8ku3rk
Sophie Rain Net Worth: OnlyFans Star's 2025 Earnings Leaked
OnlyFans star Sophie Rain has earned over $50 million across platforms and aims for $100 million soon. In November 2024, the 20-year-old went viral after revealing she had made over $43 million from OnlyFans. Her rapid financial success continues to draw significant attention online.
OnlyFans star Sophie Rain has revealed that she has made more than $50 million from her digital presence across platforms, and is on her way towards the $100 million mark, something she hopes to achieve soon. Back in November 2024, 20-year-old Rain had taken social media by storm after revealing that she had made more than $43 million from OnlyFans, the adult content-sharing platform.
SOPHIE RAIN LEAKS HER EARNINGS pic.twitter.com/evNT7GYfcM
— DramaAlert (@DramaAlert) February 26, 2025
Rain, who is from Florida, has reportedly earned over $43 million in her first year on the platform OnlyFans. Raised in a devout Christian household, Rain maintains that her work does not conflict with her religious beliefs, stating, "The Lord's very forgiving." |
rikeshsilwalekg/latest-unsloth-llama-3.2-1b-instruct | rikeshsilwalekg | 2025-03-06T10:11:54Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-06T10:01:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
s-nlp/mt0-xl-detox-mpd | s-nlp | 2025-03-06T10:11:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"detoxification",
"text_style_transfer",
"de",
"es",
"fr",
"ru",
"dataset:textdetox/multilingual_paradetox",
"arxiv:2502.06394",
"base_model:bigscience/mt0-xl",
"base_model:finetune:bigscience/mt0-xl",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-03-06T09:34:30Z | ---
library_name: transformers
tags:
- detoxification
- text_style_transfer
license: openrail++
datasets:
- textdetox/multilingual_paradetox
language:
- de
- es
- fr
- ru
base_model:
- bigscience/mt0-xl
pipeline_tag: text2text-generation
---
# mT0-XL (MultiParaDetox)

<!-- Provide a quick summary of what the model is/does. -->
This is a fine-tune of the [`bigscience/mt0-xl`](https://huggingface.co/bigscience/mt0-xl) model on the multilingual text detoxification dataset [MultiParaDetox](https://huggingface.co/datasets/textdetox/multilingual_paradetox), from the NAACL 2025 Main Track paper *SynthDetoxM: Modern LLMs are Few-Shot Parallel Detoxification Data Annotators* by Daniil Moskovskiy et al.
## Usage
Usage follows the standard `transformers` text2text-generation pipeline:
```python
from transformers import pipeline

toxic_text = "Your toxic text goes here."
pipe = pipeline("text2text-generation", model="s-nlp/mt0-xl-detox-mpd")
# The pipeline returns a list of dicts; the detoxified text is under "generated_text".
print(pipe(f"Detoxify: {toxic_text}")[0]["generated_text"])
```
## Training Details
The model was fine-tuned for 2 epochs on the [`textdetox/multilingual_paradetox`](https://huggingface.co/datasets/textdetox/multilingual_paradetox) dataset in full precision (FP32), using the Adafactor optimizer with a `1e-4` learning rate and a batch size of `4`, with gradient checkpointing enabled. The full training configuration is available below:
```json
{
"do_train": true,
"do_eval": true,
"per_device_train_batch_size": 4,
"per_device_eval_batch_size": 4,
"learning_rate": 1e-4,
"weight_decay": 0,
"num_train_epochs": 2,
"gradient_accumulation_steps": 1,
"logging_strategy": "steps",
"logging_steps": 1,
"save_strategy": "epoch",
"save_total_limit": 1,
"warmup_steps": 1,
"report_to": "wandb",
"optim": "adafactor",
"lr_scheduler_type": "linear",
"predict_with_generate": true,
"bf16": false,
"gradient_checkpointing": true,
"output_dir": "/path/",
"seed": 42,
}
```
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
We use the multilingual detoxification evaluation setup from [TextDetox 2024 Multilingual Text Detoxification Shared Task](https://pan.webis.de/clef24/pan24-web/text-detoxification.html).
Specifically, we use the following metrics:
- **Style Transfer Accuracy** (**STA**) is calculated with a [`textdetox/xlmr-large-toxicity-classifier`](https://huggingface.co/textdetox/xlmr-large-toxicity-classifier).
- **Text Similarity** (**SIM**) is calculated as a similarity of text embeddings given by a [`sentence-transformers/LaBSE`](https://huggingface.co/sentence-transformers/LaBSE) encoder.
- **Fluency** (**FL**) is calculated as a character n-gram F score - [$\text{ChrF}_1$](https://github.com/m-popovic/chrF).
These metrics are aggregated in a final **Joint** metric (**J**):
$$\textbf{J} = \frac{1}{n}\sum\limits_{i=1}^{n}\textbf{STA}(y_i) \cdot \textbf{SIM}(x_i,y_i) \cdot \textbf{FL}(x_i, y_i)$$
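In code, this is just the corpus average of the per-sentence products of the three scores. An illustrative sketch (function and argument names are ours, not from the official evaluation code):
```python
def joint_metric(sta, sim, fl):
    """sta, sim, fl: equal-length lists of per-sentence scores in [0, 1]."""
    assert len(sta) == len(sim) == len(fl)
    return sum(s * m * f for s, m, f in zip(sta, sim, fl)) / len(sta)
```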
### Evaluation Results
This model was evaluated on the test set of [`textdetox/multilingual_paradetox`](https://huggingface.co/datasets/textdetox/multilingual_paradetox) dataset from [TextDetox 2024 Multilingual Text Detoxification Shared Task](https://pan.webis.de/clef24/pan24-web/text-detoxification.html).
The results of the evaluation are presented below.
| | **German** | **Spanish** | **Russian** |
|----------------|------------|-------------|-------------|
| **Human References** | 0.733 | 0.709 | 0.732 |
| **Baselines** | | | |
| Duplicate | 0.287 | 0.090 | 0.048 |
| Delete | 0.362 | 0.319 | 0.255 |
| Backtranslation| 0.233 | 0.275 | 0.223 |
| **mT0-XL supervised fine-tuning** | | | |
| [MultiParaDetox](https://huggingface.co/datasets/textdetox/multilingual_paradetox) (this model) | 0.446 | 0.344 | 0.472 |
| [SynthDetoxM](https://huggingface.co/datasets/s-nlp/synthdetoxm) (Subset AVG) | 0.460 | 0.402 | 0.475 |
| [SynthDetoxM](https://huggingface.co/datasets/s-nlp/synthdetoxm) [`s-nlp/mt0-xl-detox-sdm-full`](https://huggingface.co/s-nlp/mt0-xl-detox-sdm-full) | **0.482** | **0.470** | **0.546** |
#### Software
Code for replicating the results from the paper can be found on [GitHub](https://github.com/s-nlp/synthdetoxm).
## Citation
**BibTeX:**
```latex
@misc{moskovskiy2025synthdetoxmmodernllmsfewshot,
title={SynthDetoxM: Modern LLMs are Few-Shot Parallel Detoxification Data Annotators},
author={Daniil Moskovskiy and Nikita Sushko and Sergey Pletenev and Elena Tutubalina and Alexander Panchenko},
year={2025},
eprint={2502.06394},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.06394},
}
```
## License
This model is licensed under the OpenRAIL++ License, which supports the development of various technologies—both industrial and academic—that serve the public good.
## Model Card Authors
[Daniil Moskovskiy](https://huggingface.co/etomoscow)
## Model Card Contact
For any questions, please contact: [Daniil Moskovskiy](mailto:[email protected]) |
mradermacher/Mistral-Large-2407-LongCoT-GGUF | mradermacher | 2025-03-06T10:10:50Z | 5 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"dataset:gghfez/long-cot-4k",
"base_model:gghfez/Mistral-Large-2407-LongCoT",
"base_model:quantized:gghfez/Mistral-Large-2407-LongCoT",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-05T09:39:17Z | ---
base_model: gghfez/Mistral-Large-2407-LongCoT
datasets:
- gghfez/long-cot-4k
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/gghfez/Mistral-Large-2407-LongCoT
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-GGUF/resolve/main/Mistral-Large-2407-LongCoT.Q2_K.gguf) | Q2_K | 45.3 | |
| [PART 1](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-GGUF/resolve/main/Mistral-Large-2407-LongCoT.Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-GGUF/resolve/main/Mistral-Large-2407-LongCoT.Q3_K_S.gguf.part2of2) | Q3_K_S | 52.9 | |
| [PART 1](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-GGUF/resolve/main/Mistral-Large-2407-LongCoT.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-GGUF/resolve/main/Mistral-Large-2407-LongCoT.Q3_K_M.gguf.part2of2) | Q3_K_M | 59.2 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-GGUF/resolve/main/Mistral-Large-2407-LongCoT.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-GGUF/resolve/main/Mistral-Large-2407-LongCoT.Q3_K_L.gguf.part2of2) | Q3_K_L | 64.7 | |
| [PART 1](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-GGUF/resolve/main/Mistral-Large-2407-LongCoT.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-GGUF/resolve/main/Mistral-Large-2407-LongCoT.IQ4_XS.gguf.part2of2) | IQ4_XS | 66.1 | |
| [PART 1](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-GGUF/resolve/main/Mistral-Large-2407-LongCoT.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-GGUF/resolve/main/Mistral-Large-2407-LongCoT.Q4_K_S.gguf.part2of2) | Q4_K_S | 69.7 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-GGUF/resolve/main/Mistral-Large-2407-LongCoT.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-GGUF/resolve/main/Mistral-Large-2407-LongCoT.Q4_K_M.gguf.part2of2) | Q4_K_M | 73.3 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-GGUF/resolve/main/Mistral-Large-2407-LongCoT.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-GGUF/resolve/main/Mistral-Large-2407-LongCoT.Q5_K_S.gguf.part2of2) | Q5_K_S | 84.5 | |
| [PART 1](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-GGUF/resolve/main/Mistral-Large-2407-LongCoT.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-GGUF/resolve/main/Mistral-Large-2407-LongCoT.Q5_K_M.gguf.part2of2) | Q5_K_M | 86.6 | |
| [PART 1](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-GGUF/resolve/main/Mistral-Large-2407-LongCoT.Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-GGUF/resolve/main/Mistral-Large-2407-LongCoT.Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-GGUF/resolve/main/Mistral-Large-2407-LongCoT.Q6_K.gguf.part3of3) | Q6_K | 100.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-GGUF/resolve/main/Mistral-Large-2407-LongCoT.Q8_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-GGUF/resolve/main/Mistral-Large-2407-LongCoT.Q8_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Mistral-Large-2407-LongCoT-GGUF/resolve/main/Mistral-Large-2407-LongCoT.Q8_0.gguf.part3of3) | Q8_0 | 130.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|