| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
PhoenixB/eba6291e-49bc-456f-8f57-65514dab3ea1 | PhoenixB | 2025-04-25T11:51:35Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"unsloth",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/mistral-7b-instruct-v0.2",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-04-25T11:47:02Z | ---
base_model: unsloth/mistral-7b-instruct-v0.2
library_name: transformers
model_name: eba6291e-49bc-456f-8f57-65514dab3ea1
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
- unsloth
licence: license
---
# Model Card for eba6291e-49bc-456f-8f57-65514dab3ea1
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="PhoenixB/eba6291e-49bc-456f-8f57-65514dab3ea1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients-On-Demand/runs/6a0uljxj)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
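For context, a minimal DPO fine-tuning sketch with TRL's `DPOTrainer`; the preference dataset, output directory, and `beta` value below are illustrative stand-ins, not this model's actual training recipe:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Stand-in preference dataset with "chosen"/"rejected" pairs; the data actually
# used to train this model is not published.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

model = AutoModelForCausalLM.from_pretrained("unsloth/mistral-7b-instruct-v0.2")
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-instruct-v0.2")

training_args = DPOConfig(output_dir="mistral-dpo", beta=0.1)  # beta scales the implicit KL penalty
trainer = DPOTrainer(
    model=model,
    args=training_args,
    processing_class=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()
```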
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Daehoya/HyperCLOVAX-SEED-Counseling | Daehoya | 2025-04-25T11:42:12Z | 0 | 0 | null | [
"safetensors",
"llama",
"counseling",
"korean",
"chat",
"empathy",
"text-generation",
"conversational",
"ko",
"dataset:custom",
"license:other",
"region:us"
] | text-generation | 2025-04-25T08:59:27Z | ---
language: ko
tags:
- counseling
- korean
- chat
- empathy
license: other
datasets:
- custom
pipeline_tag: text-generation
---
# 🧠 HyperCLOVAX-SEED-Counseling
This model is a fine-tuned version of [`naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B`](https://huggingface.co/naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B), specialized for **empathetic counseling for teenagers**.
---
## 🔍 Model Overview
- **Base model**: `naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B`
- **Fine-tuning objective**: Provide warm, non-judgmental, and emotionally supportive counseling responses tailored to youth clients.
- **Language**: Korean (한국어)
The model has been trained on real and synthetic conversations between counselors and teenage clients. It emphasizes:
- Empathy and emotional validation (e.g., "그랬구나" ("I see"), "충분히 이해돼" ("that's completely understandable"))
- Open-ended questions for self-exploration
- Avoiding direct advice or judgment
- Handling crisis situations with safe referrals
---
## 🧑⚕️ System Prompt Guideline
The system message used during training and inference is:
```
당신은 공감 능력이 뛰어난 전문 청소년 상담사입니다.
(중략: 따뜻하고 공감적인 상담 대화 규칙 포함)
```
In English: "You are a professional youth counselor with strong empathy skills" (abridged; the full prompt also includes rules for warm, empathetic counseling dialogue). This system message ensures the assistant maintains a friendly, safe, and supportive tone.
---
## 💬 Inference Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("Daehoya/HyperCLOVAX-SEED-Counseling")
tokenizer = AutoTokenizer.from_pretrained("Daehoya/HyperCLOVAX-SEED-Counseling")
prompt = "요즘 친구들과 멀어진 것 같아..."  # "Lately I feel like I'm drifting away from my friends..."
inputs = tokenizer.apply_chat_template(
    [
        {"role": "system", "content": "당신은 공감 능력이 뛰어난 전문 청소년 상담사입니다."},
        {"role": "user", "content": prompt}
    ],
    add_generation_prompt=True,  # append the assistant turn marker so the model responds
    return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=300, do_sample=True, temperature=0.7)  # do_sample=True makes temperature take effect
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
---
## 🧪 Training Details
- **Optimizer**: AdamW
- **Batch size**: 3 per device (gradient_accumulation_steps=20)
- **Epochs**: 3
- **Max input length**: up to 8192 tokens
- **Hardware**: 4×A100 GPUs
- **Precision**: FP16
- **Framework**: `transformers.Trainer`
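A minimal sketch of these settings as `transformers.TrainingArguments`; the output directory and exact optimizer variant are assumptions, and unrelated arguments are omitted:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hyperclovax-seed-counseling",  # placeholder path
    per_device_train_batch_size=3,
    gradient_accumulation_steps=20,
    num_train_epochs=3,
    fp16=True,            # FP16 precision, as reported above
    optim="adamw_torch",  # AdamW; the exact variant is an assumption
)
```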
---
## 📁 Files Included
- `pytorch_model.bin` or `model-*.safetensors`: Model weights
- `tokenizer.json`, `tokenizer_config.json`: Tokenizer files
- `config.json`: Model config
- `generation_config.json`: Sampling configuration
- `README.md`: This file
---
## 📜 License
This model is released under the same license as the base model. Please review [NAVER CLOVA's licensing policy](https://huggingface.co/naver-hyperclovax).
---
## 🙏 Acknowledgements
Thanks to NAVER CLOVA for the base model and the community for ongoing contributions in mental health AI.
|
dgambettaphd/M_llm3_gen6_run0_WXS_doc1000_synt64_tot128_MPP | dgambettaphd | 2025-04-25T11:35:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T11:35:05Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shubhamprshr/Qwen2.5-1.5B-Instruct_blocksworld1246_sgrpo_classic_0.5_0.5_True_300 | shubhamprshr | 2025-04-25T10:42:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"dataset:blocksworld-dataset",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-25T04:02:53Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: blocksworld-dataset
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct_blocksworld1246_sgrpo_classic_0.5_0.5_True_300
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct_blocksworld1246_sgrpo_classic_0.5_0.5_True_300
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [blocksworld-dataset](https://huggingface.co/datasets/blocksworld-dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="shubhamprshr/Qwen2.5-1.5B-Instruct_blocksworld1246_sgrpo_classic_0.5_0.5_True_300", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/shubhamprshr27-tamu/BW2/runs/jx4znt38)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
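For context, a minimal GRPO training sketch with TRL's `GRPOTrainer`; the reward function and dataset below are illustrative placeholders, not the Blocksworld-specific setup actually used:
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder reward: GRPO needs one or more reward functions mapping sampled
# completions to scalar scores. The real run scored Blocksworld plans instead.
def reward_len(completions, **kwargs):
    return [-abs(len(c) - 200) for c in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # stand-in prompt dataset
training_args = GRPOConfig(
    output_dir="qwen-grpo",
    per_device_train_batch_size=4,
    num_generations=4,  # completions sampled per prompt for the group baseline
)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-1.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```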
### Framework versions
- TRL: 0.14.0
- Transformers: 4.48.1
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
dzanbek/8bca9373-ecad-4fea-8ec6-4538ae12eebc | dzanbek | 2025-04-25T10:38:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Solar-10b-32k",
"base_model:adapter:NousResearch/Yarn-Solar-10b-32k",
"license:apache-2.0",
"region:us"
] | null | 2025-04-25T09:33:00Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Solar-10b-32k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8bca9373-ecad-4fea-8ec6-4538ae12eebc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: NousResearch/Yarn-Solar-10b-32k
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 338a122e43543931_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/338a122e43543931_train_data.json
type:
field_input: raw_texts
field_instruction: gen_questions
field_output: Positive
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: dzanbek/8bca9373-ecad-4fea-8ec6-4538ae12eebc
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/338a122e43543931_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3a6c2313-e0d4-427f-b835-4522b7af6bde
wandb_project: s56-2
wandb_run: your_name
wandb_runid: 3a6c2313-e0d4-427f-b835-4522b7af6bde
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8bca9373-ecad-4fea-8ec6-4538ae12eebc
This model is a fine-tuned version of [NousResearch/Yarn-Solar-10b-32k](https://huggingface.co/NousResearch/Yarn-Solar-10b-32k) on the dataset described in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: nan
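A minimal sketch for loading this LoRA adapter on top of its base model with `peft` (the `device_map` choice is illustrative; `trust_remote_code` matches the setting in the config above):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Yarn-Solar-10b-32k", trust_remote_code=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, "dzanbek/8bca9373-ecad-4fea-8ec6-4538ae12eebc")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Yarn-Solar-10b-32k", trust_remote_code=True)
```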
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0599 | 0.0104 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/llm-embedder-i1-GGUF | mradermacher | 2025-04-25T10:29:02Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:BAAI/llm-embedder",
"base_model:quantized:BAAI/llm-embedder",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"feature-extraction"
] | null | 2025-04-25T10:25:09Z | ---
base_model: BAAI/llm-embedder
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/BAAI/llm-embedder
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/llm-embedder-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
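As one option, the files can also be used from Python via `llama-cpp-python`; a minimal embedding sketch, assuming one of the quants listed below has been downloaded locally:
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a downloaded quant in embedding mode (file name taken from the table below).
llm = Llama(model_path="llm-embedder.i1-Q4_K_M.gguf", embedding=True)
result = llm.create_embedding("A sentence to embed")
vector = result["data"][0]["embedding"]
print(len(vector))  # embedding dimensionality
```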
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llm-embedder-i1-GGUF/resolve/main/llm-embedder.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/llm-embedder-i1-GGUF/resolve/main/llm-embedder.i1-IQ1_M.gguf) | i1-IQ1_M | 0.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/llm-embedder-i1-GGUF/resolve/main/llm-embedder.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/llm-embedder-i1-GGUF/resolve/main/llm-embedder.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/llm-embedder-i1-GGUF/resolve/main/llm-embedder.i1-IQ2_S.gguf) | i1-IQ2_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/llm-embedder-i1-GGUF/resolve/main/llm-embedder.i1-IQ2_M.gguf) | i1-IQ2_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/llm-embedder-i1-GGUF/resolve/main/llm-embedder.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/llm-embedder-i1-GGUF/resolve/main/llm-embedder.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llm-embedder-i1-GGUF/resolve/main/llm-embedder.i1-Q2_K.gguf) | i1-Q2_K | 0.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/llm-embedder-i1-GGUF/resolve/main/llm-embedder.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/llm-embedder-i1-GGUF/resolve/main/llm-embedder.i1-IQ3_S.gguf) | i1-IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llm-embedder-i1-GGUF/resolve/main/llm-embedder.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/llm-embedder-i1-GGUF/resolve/main/llm-embedder.i1-IQ3_M.gguf) | i1-IQ3_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/llm-embedder-i1-GGUF/resolve/main/llm-embedder.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/llm-embedder-i1-GGUF/resolve/main/llm-embedder.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/llm-embedder-i1-GGUF/resolve/main/llm-embedder.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/llm-embedder-i1-GGUF/resolve/main/llm-embedder.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/llm-embedder-i1-GGUF/resolve/main/llm-embedder.i1-Q4_0.gguf) | i1-Q4_0 | 0.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/llm-embedder-i1-GGUF/resolve/main/llm-embedder.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/llm-embedder-i1-GGUF/resolve/main/llm-embedder.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llm-embedder-i1-GGUF/resolve/main/llm-embedder.i1-Q4_1.gguf) | i1-Q4_1 | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/llm-embedder-i1-GGUF/resolve/main/llm-embedder.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/llm-embedder-i1-GGUF/resolve/main/llm-embedder.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/llm-embedder-i1-GGUF/resolve/main/llm-embedder.i1-Q6_K.gguf) | i1-Q6_K | 0.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Bhumi-Ahir-Viral-Video/Original-Viral-Link.Bhumi.Ahir.Viral.Video.Leaks.official.HD | Bhumi-Ahir-Viral-Video | 2025-04-25T10:03:03Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-25T10:02:20Z |
<a href="https://sdu.sk/9Ip"><img src="http://4.bp.blogspot.com/-VFcup4RzDQY/Upiobuokb5I/AAAAAAAAAV0/64yKpZilDCg/s1600/oie_nxv3mlmduAj1.gif" alt="fsd" /></a>
<a href="https://sdu.sk/9Ip" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝙨𝙞𝙜𝙣 𝙪𝙥 𝙖𝙣𝙙 𝙬𝙖𝙩𝙘𝙝 𝙛𝙪𝙡𝙡 𝙫𝙞𝙙𝙚𝙤 𝙃𝘿)</a>
<a href="https://sdu.sk/9Ip" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤)</a>
|
mlfoundations-dev/b2_math_fasttext_pos_s1k_reformat_neg_lap1official_math_10k | mlfoundations-dev | 2025-04-25T09:57:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-24T23:38:27Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: b2_math_fasttext_pos_s1k_reformat_neg_lap1official_math_10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# b2_math_fasttext_pos_s1k_reformat_neg_lap1official_math_10k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/b2_math_fasttext_pos_s1k_reformat_neg_lap1official_math_10k dataset.
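A minimal inference sketch, assuming the standard `transformers` pipeline API; the prompt and generation settings are illustrative:
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mlfoundations-dev/b2_math_fasttext_pos_s1k_reformat_neg_lap1official_math_10k",
    device_map="auto",
)
output = generator(
    [{"role": "user", "content": "Solve: 12 * 7 + 5"}],
    max_new_tokens=128,
    return_full_text=False,
)[0]
print(output["generated_text"])
```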
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
ansh6/crypto | ansh6 | 2025-04-25T09:51:14Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-04-25T09:51:13Z | ---
license: bigcode-openrail-m
---
|
Kam1qwe/Kam1lka | Kam1qwe | 2025-04-25T09:46:51Z | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
] | null | 2025-04-25T09:46:51Z | ---
license: artistic-2.0
---
|
Ktejaswi/AI_Powered_Dream_Interpreter | Ktejaswi | 2025-04-25T09:36:18Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-25T09:36:18Z | ---
license: apache-2.0
---
|
inclusionAI/Ring-lite-linear-preview | inclusionAI | 2025-04-25T09:32:32Z | 3 | 8 | null | [
"safetensors",
"bailing_moe_linear",
"text-generation",
"conversational",
"custom_code",
"zh",
"en",
"base_model:inclusionAI/Ling-lite",
"base_model:finetune:inclusionAI/Ling-lite",
"license:mit",
"region:us"
] | text-generation | 2025-04-24T02:58:46Z | ---
license: mit
language:
- zh
- en
base_model:
- inclusionAI/Ling-lite
pipeline_tag: text-generation
---
# Ring-lite-linear-preview
<p align="center">
  <img src="https://huggingface.co/inclusionAI/Ring-lite-linear-preview/resolve/main/ant-bailing.png" width="100"/>
</p>
<p align="center">
  🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>
</p>
## Introduction
Ring-lite-linear-preview is a hybrid-linear MoE LLM open-sourced by InclusionAI, with 17.1B total parameters of which 3.0B are activated. It is a long-reasoning model built on hybrid linear attention, achieving near-linear computational complexity and near-constant space complexity during inference. It was converted from [Ling-lite-0220](https://huggingface.co/inclusionAI/Ling-lite/tree/Ling-lite-0220), which uses a softmax-attention-based architecture. It matches the performance of DeepSeek-R1-Distill-Qwen-7B on standardized reasoning benchmarks while substantially reducing computational overhead in both training and inference. In generation-speed tests based on vLLM, we observed more than double the throughput of softmax attention models of the same scale (e.g., Ling-lite). To the best of our knowledge, it is the first open-source hybrid-linear reasoning language model.
## Model Downloads
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
| Ring-lite-linear-preview | 17.1B | 3.0B | 64K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ring-lite-linear-preview)|
</div>
## Evaluation
In terms of the evaluation of reasoning ability, Ring-lite-linear-preview achieves 55.0 on AIME24 and 93.8 on MATH-500.
<div align="center">
| **Model** | **AIME24** | **MATH-500** | **GPQA-diamond** | **LiveCodeBench** |
| :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
| DeepSeek-R1-Distill-Qwen-7B (reported) | 55.5 | 92.8 | 49.1 | 37.6 |
| DeepSeek-R1-Distill-Qwen-7B (reproduce) | 53.2 | 93.7 | 50.4 | 36.5 |
| Ring-lite-distill-preview-Stage-1 | 54.2 | 93.5 | 47.5 | 32.9 |
| Ring-lite-linear-preview | 55.0 | 93.8 | 46.5 | 29.8 |
</div>
## Inference Speed
To evaluate the generation throughput, we deploy Ring-lite-linear and the softmax-attention-based Ring-lite based on vLLM on a single NVIDIA A100 GPU. We conduct two sets of experiments:
1. **Long Input Evaluation**: We measure the time-to-first-token (TTFT) with varying input sequence lengths (from 512 to 384k tokens) using batch size 1 and TP=1. As shown in the top figure, at 384k input length, Ring-lite-linear achieves 3.5× faster TTFT compared to the softmax-attention-based model.
2. **Long Output Evaluation**: We fix the input sequence length to 1 and measure the end-to-end (E2E) generation time required for generating output sequences of varying lengths (from 512 to 32k tokens) with batch size 64 and TP=1. As illustrated in the bottom figure, at 32k output length, Ring-lite-linear achieves 2.2× throughput of the softmax-attention-based Ring-lite.
These results demonstrate that our hybrid linear attention mechanism significantly improves both input processing efficiency and generation throughput, especially for long context scenarios.
<p align="center">
  <img src="https://huggingface.co/inclusionAI/Ring-lite-linear-preview/resolve/main/throughput.png" width="600"/>
</p>
Additionally, to illustrate the advantage in inference speed, we present a comparison between Ring-lite-linear-preview and the softmax-attention-based Ring-lite under a batch size of 64 and an output length of 16k (animation shown at 60× speed). It can be observed that the KV cache usage of Ring-lite-linear-preview is nearly 1/6 that of Ring-lite, and the E2E time is reduced by 27.24% compared with Ring-lite.
<p align="center">
  <img src="https://huggingface.co/inclusionAI/Ring-lite-linear-preview/resolve/main/inference_speed.gif" width="600"/>
</p>
More details will be reported in our technical report [TBD]
## Requirements
- [transformers](https://github.com/huggingface/transformers) >= 4.48.3
- [flash-linear-attention](https://github.com/fla-org/flash-linear-attention) >= 0.2.1
## Quickstart
Here is a code snippet showing how to use the chat model with `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "inclusionAI/Ring-lite-linear-preview"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language models."
messages = [
{"role": "system", "content": "You are Ring, an assistant created by inclusionAI"},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=8192
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## Deployment
Please refer to [Github](https://github.com/inclusionAI/Ring/tree/main/hybrid_linear)
## Dataset
The long reasoning sft data: [Ring-lite-distill-preview-sft-data](https://huggingface.co/datasets/inclusionAI/Ring-lite-distill-preview-sft-data)
## License
This code repository is licensed under [the MIT License](https://huggingface.co/inclusionAI/Ring-lite-linear-preview/blob/main/LICENSE).
## Citation
[TBD]
|
efficientscaling/Z1-Longest-7B | efficientscaling | 2025-04-25T09:11:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-25T09:10:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
danyush/qwen2.5_vl_7b_virat_shuffled | danyush | 2025-04-25T09:10:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-25T09:04:57Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vmpsergio/9463d9df-44d0-4b5b-a588-fc9f202b7e1d | vmpsergio | 2025-04-25T08:57:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-350m",
"base_model:adapter:facebook/opt-350m",
"license:other",
"region:us"
] | null | 2025-04-25T08:52:27Z | ---
library_name: peft
license: other
base_model: facebook/opt-350m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9463d9df-44d0-4b5b-a588-fc9f202b7e1d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: facebook/opt-350m
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 32cb49683e226f4d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/32cb49683e226f4d_train_data.json
type:
field_input: author
field_instruction: dynasty
field_output: content
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vmpsergio/9463d9df-44d0-4b5b-a588-fc9f202b7e1d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/32cb49683e226f4d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a4199bed-2854-4046-9e07-45f55e8274f5
wandb_project: s56-2
wandb_run: your_name
wandb_runid: a4199bed-2854-4046-9e07-45f55e8274f5
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9463d9df-44d0-4b5b-a588-fc9f202b7e1d
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the dataset described in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 3.3061
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.3525 | 0.0078 | 200 | 3.3061 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
annasoli/Qwen2.5-14B-Instruct_bad_med_dpR1_15-17_21-23_27-29 | annasoli | 2025-04-25T08:37:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-14B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T08:37:23Z | ---
base_model: unsloth/Qwen2.5-14B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** annasoli
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-14B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
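A minimal loading sketch with Unsloth; `max_seq_length` and `load_in_4bit` below are assumptions, not settings confirmed by this card:
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="annasoli/Qwen2.5-14B-Instruct_bad_med_dpR1_15-17_21-23_27-29",
    max_seq_length=2048,  # assumption
    load_in_4bit=True,    # assumption; reduces memory at some quality cost
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
```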
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LINK-Sophie-Rain-Spiderman-Viral-Videos/Original.Sophie.Rain.Spiderman.Video.Leaks.official | LINK-Sophie-Rain-Spiderman-Viral-Videos | 2025-04-25T06:22:41Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-25T06:22:21Z | <animated-image data-catalyst=""><a href="https://sexleakedviral.com/sophie-rain-spiderman/?sophie-rain-spiderman-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
RichardErkhov/minhhien0811_-__ra_04_checkpoint-456-gguf | RichardErkhov | 2025-04-25T05:50:07Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-25T04:27:46Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
_ra_04_checkpoint-456 - GGUF
- Model creator: https://huggingface.co/minhhien0811/
- Original model: https://huggingface.co/minhhien0811/_ra_04_checkpoint-456/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [_ra_04_checkpoint-456.Q2_K.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-__ra_04_checkpoint-456-gguf/blob/main/_ra_04_checkpoint-456.Q2_K.gguf) | Q2_K | 2.81GB |
| [_ra_04_checkpoint-456.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-__ra_04_checkpoint-456-gguf/blob/main/_ra_04_checkpoint-456.IQ3_XS.gguf) | IQ3_XS | 3.12GB |
| [_ra_04_checkpoint-456.IQ3_S.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-__ra_04_checkpoint-456-gguf/blob/main/_ra_04_checkpoint-456.IQ3_S.gguf) | IQ3_S | 3.26GB |
| [_ra_04_checkpoint-456.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-__ra_04_checkpoint-456-gguf/blob/main/_ra_04_checkpoint-456.Q3_K_S.gguf) | Q3_K_S | 3.25GB |
| [_ra_04_checkpoint-456.IQ3_M.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-__ra_04_checkpoint-456-gguf/blob/main/_ra_04_checkpoint-456.IQ3_M.gguf) | IQ3_M | 3.33GB |
| [_ra_04_checkpoint-456.Q3_K.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-__ra_04_checkpoint-456-gguf/blob/main/_ra_04_checkpoint-456.Q3_K.gguf) | Q3_K | 3.55GB |
| [_ra_04_checkpoint-456.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-__ra_04_checkpoint-456-gguf/blob/main/_ra_04_checkpoint-456.Q3_K_M.gguf) | Q3_K_M | 3.55GB |
| [_ra_04_checkpoint-456.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-__ra_04_checkpoint-456-gguf/blob/main/_ra_04_checkpoint-456.Q3_K_L.gguf) | Q3_K_L | 3.81GB |
| [_ra_04_checkpoint-456.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-__ra_04_checkpoint-456-gguf/blob/main/_ra_04_checkpoint-456.IQ4_XS.gguf) | IQ4_XS | 3.96GB |
| [_ra_04_checkpoint-456.Q4_0.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-__ra_04_checkpoint-456-gguf/blob/main/_ra_04_checkpoint-456.Q4_0.gguf) | Q4_0 | 4.13GB |
| [_ra_04_checkpoint-456.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-__ra_04_checkpoint-456-gguf/blob/main/_ra_04_checkpoint-456.IQ4_NL.gguf) | IQ4_NL | 4.16GB |
| [_ra_04_checkpoint-456.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-__ra_04_checkpoint-456-gguf/blob/main/_ra_04_checkpoint-456.Q4_K_S.gguf) | Q4_K_S | 4.15GB |
| [_ra_04_checkpoint-456.Q4_K.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-__ra_04_checkpoint-456-gguf/blob/main/_ra_04_checkpoint-456.Q4_K.gguf) | Q4_K | 4.36GB |
| [_ra_04_checkpoint-456.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-__ra_04_checkpoint-456-gguf/blob/main/_ra_04_checkpoint-456.Q4_K_M.gguf) | Q4_K_M | 4.36GB |
| [_ra_04_checkpoint-456.Q4_1.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-__ra_04_checkpoint-456-gguf/blob/main/_ra_04_checkpoint-456.Q4_1.gguf) | Q4_1 | 4.54GB |
| [_ra_04_checkpoint-456.Q5_0.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-__ra_04_checkpoint-456-gguf/blob/main/_ra_04_checkpoint-456.Q5_0.gguf) | Q5_0 | 4.95GB |
| [_ra_04_checkpoint-456.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-__ra_04_checkpoint-456-gguf/blob/main/_ra_04_checkpoint-456.Q5_K_S.gguf) | Q5_K_S | 4.95GB |
| [_ra_04_checkpoint-456.Q5_K.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-__ra_04_checkpoint-456-gguf/blob/main/_ra_04_checkpoint-456.Q5_K.gguf) | Q5_K | 5.07GB |
| [_ra_04_checkpoint-456.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-__ra_04_checkpoint-456-gguf/blob/main/_ra_04_checkpoint-456.Q5_K_M.gguf) | Q5_K_M | 5.07GB |
| [_ra_04_checkpoint-456.Q5_1.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-__ra_04_checkpoint-456-gguf/blob/main/_ra_04_checkpoint-456.Q5_1.gguf) | Q5_1 | 5.36GB |
| [_ra_04_checkpoint-456.Q6_K.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-__ra_04_checkpoint-456-gguf/blob/main/_ra_04_checkpoint-456.Q6_K.gguf) | Q6_K | 5.82GB |
| [_ra_04_checkpoint-456.Q8_0.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-__ra_04_checkpoint-456-gguf/blob/main/_ra_04_checkpoint-456.Q8_0.gguf) | Q8_0 | 7.54GB |
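A hypothetical way to run one of these quants locally with `llama-cpp-python`; the file name is taken from the table above, and the context size is illustrative:
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="_ra_04_checkpoint-456.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```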
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/minhhien0811_-_only-nvidia_9966-gguf | RichardErkhov | 2025-04-25T05:46:48Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-25T04:26:20Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
only-nvidia_9966 - GGUF
- Model creator: https://huggingface.co/minhhien0811/
- Original model: https://huggingface.co/minhhien0811/only-nvidia_9966/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [only-nvidia_9966.Q2_K.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_only-nvidia_9966-gguf/blob/main/only-nvidia_9966.Q2_K.gguf) | Q2_K | 2.81GB |
| [only-nvidia_9966.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_only-nvidia_9966-gguf/blob/main/only-nvidia_9966.IQ3_XS.gguf) | IQ3_XS | 3.12GB |
| [only-nvidia_9966.IQ3_S.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_only-nvidia_9966-gguf/blob/main/only-nvidia_9966.IQ3_S.gguf) | IQ3_S | 3.26GB |
| [only-nvidia_9966.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_only-nvidia_9966-gguf/blob/main/only-nvidia_9966.Q3_K_S.gguf) | Q3_K_S | 3.25GB |
| [only-nvidia_9966.IQ3_M.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_only-nvidia_9966-gguf/blob/main/only-nvidia_9966.IQ3_M.gguf) | IQ3_M | 3.33GB |
| [only-nvidia_9966.Q3_K.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_only-nvidia_9966-gguf/blob/main/only-nvidia_9966.Q3_K.gguf) | Q3_K | 3.55GB |
| [only-nvidia_9966.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_only-nvidia_9966-gguf/blob/main/only-nvidia_9966.Q3_K_M.gguf) | Q3_K_M | 3.55GB |
| [only-nvidia_9966.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_only-nvidia_9966-gguf/blob/main/only-nvidia_9966.Q3_K_L.gguf) | Q3_K_L | 3.81GB |
| [only-nvidia_9966.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_only-nvidia_9966-gguf/blob/main/only-nvidia_9966.IQ4_XS.gguf) | IQ4_XS | 3.96GB |
| [only-nvidia_9966.Q4_0.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_only-nvidia_9966-gguf/blob/main/only-nvidia_9966.Q4_0.gguf) | Q4_0 | 4.13GB |
| [only-nvidia_9966.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_only-nvidia_9966-gguf/blob/main/only-nvidia_9966.IQ4_NL.gguf) | IQ4_NL | 4.16GB |
| [only-nvidia_9966.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_only-nvidia_9966-gguf/blob/main/only-nvidia_9966.Q4_K_S.gguf) | Q4_K_S | 4.15GB |
| [only-nvidia_9966.Q4_K.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_only-nvidia_9966-gguf/blob/main/only-nvidia_9966.Q4_K.gguf) | Q4_K | 4.36GB |
| [only-nvidia_9966.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_only-nvidia_9966-gguf/blob/main/only-nvidia_9966.Q4_K_M.gguf) | Q4_K_M | 4.36GB |
| [only-nvidia_9966.Q4_1.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_only-nvidia_9966-gguf/blob/main/only-nvidia_9966.Q4_1.gguf) | Q4_1 | 4.54GB |
| [only-nvidia_9966.Q5_0.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_only-nvidia_9966-gguf/blob/main/only-nvidia_9966.Q5_0.gguf) | Q5_0 | 4.95GB |
| [only-nvidia_9966.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_only-nvidia_9966-gguf/blob/main/only-nvidia_9966.Q5_K_S.gguf) | Q5_K_S | 4.95GB |
| [only-nvidia_9966.Q5_K.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_only-nvidia_9966-gguf/blob/main/only-nvidia_9966.Q5_K.gguf) | Q5_K | 5.07GB |
| [only-nvidia_9966.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_only-nvidia_9966-gguf/blob/main/only-nvidia_9966.Q5_K_M.gguf) | Q5_K_M | 5.07GB |
| [only-nvidia_9966.Q5_1.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_only-nvidia_9966-gguf/blob/main/only-nvidia_9966.Q5_1.gguf) | Q5_1 | 5.36GB |
| [only-nvidia_9966.Q6_K.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_only-nvidia_9966-gguf/blob/main/only-nvidia_9966.Q6_K.gguf) | Q6_K | 5.82GB |
| [only-nvidia_9966.Q8_0.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_only-nvidia_9966-gguf/blob/main/only-nvidia_9966.Q8_0.gguf) | Q8_0 | 7.54GB |
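As a usage sketch (not part of the original card), any of the quants above can be run locally with the `llama-cpp-python` bindings; the file path and prompt below are assumptions:

```python
from llama_cpp import Llama

# Load a downloaded quant (local path is an assumption); smaller quants in
# the table trade output quality for a smaller memory footprint.
llm = Llama(model_path="./only-nvidia_9966.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write one sentence about GPUs.", max_tokens=64)
print(out["choices"][0]["text"])
```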
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shisa-ai/ablation-182-a177.finaldpo2.6.5e7-shisa-v2-mistral-nemo-12b | shisa-ai | 2025-04-25T04:50:49Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"arxiv:2305.18290",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-10T20:43:48Z | ---
library_name: transformers
model_name: outputs/ablation-182-a177.finaldpo2.6.5e7-shisa-v2-mistral-nemo-12b
tags:
- generated_from_trainer
licence: license
---
# Model Card for outputs/ablation-182-a177.finaldpo2.6.5e7-shisa-v2-mistral-nemo-12b
This model is a fine-tuned version of the local checkpoint `/fsx2/outputs/ablation-177-finalsft2-shisa-v2-mistral-nemo-12b`.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="shisa-ai/ablation-182-a177.finaldpo2.6.5e7-shisa-v2-mistral-nemo-12b", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/augmxnt/shisa-v2/runs/pvui2il9)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
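For reference, DPO minimizes the following preference loss from the cited paper, where π_θ is the policy being tuned, π_ref a frozen reference model, and (y_w, y_l) a preferred/rejected completion pair:

```latex
% DPO preference loss (Rafailov et al., 2023); beta scales the implicit reward.
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) =
  -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}} \left[
    \log \sigma \left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)
  \right]
```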
### Framework versions
- TRL: 0.15.1
- Transformers: 4.50.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
KingEmpire/sn9_pretcom4_2504_1 | KingEmpire | 2025-04-25T04:50:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-25T04:37:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlfoundations-dev/b2_science_length_gpt4omini_3k | mlfoundations-dev | 2025-04-25T04:25:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-25T00:37:41Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: b2_science_length_gpt4omini_3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# b2_science_length_gpt4omini_3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/b2_science_length_gpt4omini_3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (the effective batch size is derived in the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
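A quick arithmetic check of the effective batch size, using the values listed above:

```python
# Effective batch size = per-device batch * num devices * grad accumulation.
train_batch_size = 1
num_devices = 4
gradient_accumulation_steps = 24
total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
assert total_train_batch_size == 96  # matches total_train_batch_size above
```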
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
striverweb/ppo-LunarLander-v2 | striverweb | 2025-04-25T04:21:52Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-25T03:45:06Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.79 +/- 16.54
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption based on common SB3 naming):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained agent.
checkpoint = load_from_hub("striverweb/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf | RichardErkhov | 2025-04-25T04:14:18Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-25T02:38:29Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
deita-arena-nvidia-sft-rag-17500 - GGUF
- Model creator: https://huggingface.co/minhhien0811/
- Original model: https://huggingface.co/minhhien0811/deita-arena-nvidia-sft-rag-17500/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [deita-arena-nvidia-sft-rag-17500.Q2_K.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q2_K.gguf) | Q2_K | 2.81GB |
| [deita-arena-nvidia-sft-rag-17500.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.IQ3_XS.gguf) | IQ3_XS | 3.12GB |
| [deita-arena-nvidia-sft-rag-17500.IQ3_S.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.IQ3_S.gguf) | IQ3_S | 3.26GB |
| [deita-arena-nvidia-sft-rag-17500.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q3_K_S.gguf) | Q3_K_S | 3.25GB |
| [deita-arena-nvidia-sft-rag-17500.IQ3_M.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.IQ3_M.gguf) | IQ3_M | 3.33GB |
| [deita-arena-nvidia-sft-rag-17500.Q3_K.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q3_K.gguf) | Q3_K | 3.55GB |
| [deita-arena-nvidia-sft-rag-17500.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q3_K_M.gguf) | Q3_K_M | 3.55GB |
| [deita-arena-nvidia-sft-rag-17500.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q3_K_L.gguf) | Q3_K_L | 3.81GB |
| [deita-arena-nvidia-sft-rag-17500.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.IQ4_XS.gguf) | IQ4_XS | 3.96GB |
| [deita-arena-nvidia-sft-rag-17500.Q4_0.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q4_0.gguf) | Q4_0 | 4.13GB |
| [deita-arena-nvidia-sft-rag-17500.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.IQ4_NL.gguf) | IQ4_NL | 4.16GB |
| [deita-arena-nvidia-sft-rag-17500.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q4_K_S.gguf) | Q4_K_S | 4.15GB |
| [deita-arena-nvidia-sft-rag-17500.Q4_K.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q4_K.gguf) | Q4_K | 4.36GB |
| [deita-arena-nvidia-sft-rag-17500.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q4_K_M.gguf) | Q4_K_M | 4.36GB |
| [deita-arena-nvidia-sft-rag-17500.Q4_1.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q4_1.gguf) | Q4_1 | 4.54GB |
| [deita-arena-nvidia-sft-rag-17500.Q5_0.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q5_0.gguf) | Q5_0 | 4.95GB |
| [deita-arena-nvidia-sft-rag-17500.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q5_K_S.gguf) | Q5_K_S | 4.95GB |
| [deita-arena-nvidia-sft-rag-17500.Q5_K.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q5_K.gguf) | Q5_K | 5.07GB |
| [deita-arena-nvidia-sft-rag-17500.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q5_K_M.gguf) | Q5_K_M | 5.07GB |
| [deita-arena-nvidia-sft-rag-17500.Q5_1.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q5_1.gguf) | Q5_1 | 5.36GB |
| [deita-arena-nvidia-sft-rag-17500.Q6_K.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q6_K.gguf) | Q6_K | 5.82GB |
| [deita-arena-nvidia-sft-rag-17500.Q8_0.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q8_0.gguf) | Q8_0 | 7.54GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
stain195/claassification938104 | stain195 | 2025-04-25T03:27:47Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-25T03:27:46Z | ---
license: apache-2.0
---
|
aisingapore/Llama-SEA-LION-v3-70B-IT-GGUF | aisingapore | 2025-04-25T03:21:52Z | 1,205 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"en",
"zh",
"vi",
"id",
"th",
"fil",
"ta",
"ms",
"km",
"lo",
"my",
"jv",
"su",
"arxiv:2504.05747",
"base_model:aisingapore/Llama-SEA-LION-v3-70B-IT",
"base_model:quantized:aisingapore/Llama-SEA-LION-v3-70B-IT",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-12-16T03:01:34Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- aisingapore/Llama-SEA-LION-v3-70B-IT
base_model_relation: quantized
language:
- en
- zh
- vi
- id
- th
- fil
- ta
- ms
- km
- lo
- my
- jv
- su
license: llama3.1
---
<div>
<img src="llama_3.1_70b_sea-lion_v3_gguf_banner.png"/>
</div>
# Llama-SEA-LION-v3-70B-IT
[SEA-LION](https://arxiv.org/abs/2504.05747) is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
Llama-SEA-LION-v3-70B-IT is a multilingual model that has been fine-tuned in two stages on approximately **12.3M English instruction-completion pairs** alongside a pool of **4.5M Southeast Asian instruction-completion pairs** from SEA languages such as Indonesian, Javanese, Sundanese, Tamil, Thai, and Vietnamese.
SEA-LION stands for _Southeast Asian Languages In One Network_.
- **Developed by:** Products Pillar, AI Singapore
- **Funded by:** Singapore NRF
- **Model type:** Decoder
- **Languages supported:** Burmese, Chinese, English, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Sundanese, Tamil, Thai, Vietnamese
- **License:** [Llama 3.1 Community License](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/blob/main/LICENSE)
## Description
This repo contains `GGUF` format model files for [aisingapore/Llama-SEA-LION-v3-70B-IT](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-70B-IT).
#### Model Weights Included in this repository:
- [Llama-SEA-LION-v3-70B-IT-F16](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-70B-IT-GGUF/blob/main/Llama-SEA-LION-v3-70B-IT-F16-00001-of-00008.gguf)
- [Llama-SEA-LION-v3-70B-IT-Q2_K](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-70B-IT-GGUF/blob/main/Llama-SEA-LION-v3-70B-IT-Q2_K.gguf)
- [Llama-SEA-LION-v3-70B-IT-Q3_K_M](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-70B-IT-GGUF/blob/main/Llama-SEA-LION-v3-70B-IT-Q3_K_M.gguf)
- [Llama-SEA-LION-v3-70B-IT-Q4_0](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-70B-IT-GGUF/blob/main/Llama-SEA-LION-v3-70B-IT-Q4_0.gguf)
- [Llama-SEA-LION-v3-70B-IT-Q4_K_M](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-70B-IT-GGUF/blob/main/Llama-SEA-LION-v3-70B-IT-Q4_K_M.gguf)
- [Llama-SEA-LION-v3-70B-IT-Q5_0](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-70B-IT-GGUF/blob/main/Llama-SEA-LION-v3-70B-IT-Q5_0.gguf)
- [Llama-SEA-LION-v3-70B-IT-Q5_K_M](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-70B-IT-GGUF/blob/main/Llama-SEA-LION-v3-70B-IT-Q5_K_M.gguf)
- [Llama-SEA-LION-v3-70B-IT-Q6_K](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-70B-IT-GGUF/blob/main/Llama-SEA-LION-v3-70B-IT-Q6_K-00001-of-00002.gguf)
- [Llama-SEA-LION-v3-70B-IT-Q8_0](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-70B-IT-GGUF/blob/main/Llama-SEA-LION-v3-70B-IT-Q8_0-00001-of-00003.gguf)
> [!NOTE]
> Take note that some GGUFs are split into parts. Most tools such as [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and those built on it support split GGUFs; pointing the tool at the first split is sufficient for it to work.
> In the event that a merge is necessary, it can be done using `llama.cpp`'s `gguf-split`: `./gguf-split --merge ./path/to/first-split ./path/to/output-gguf`
> More details: [gguf-split guide](https://github.com/ggerganov/llama.cpp/discussions/6404) & [README](https://github.com/ggerganov/llama.cpp/tree/master/examples/gguf-split)
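Any of the weights listed above can also be fetched programmatically; a minimal sketch using `huggingface_hub` (the filename is one of the single-file quants from the list):

```python
from huggingface_hub import hf_hub_download

# Fetch one single-file quant from this repository.
path = hf_hub_download(
    repo_id="aisingapore/Llama-SEA-LION-v3-70B-IT-GGUF",
    filename="Llama-SEA-LION-v3-70B-IT-Q4_K_M.gguf",
)
print(path)
```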
### Caveats
It is important for users to be aware that our model exhibits certain limitations that warrant consideration. Like many LLMs, the model can hallucinate and occasionally generates irrelevant content, introducing fictional elements that are not grounded in the provided context. Users should also exercise caution in interpreting and validating the model's responses due to the potential inconsistencies in its reasoning.
## Limitations
### Safety
Current SEA-LION models, including this commercially permissive release, have not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes.
## Technical Specifications
### Fine-Tuning Details
Llama-SEA-LION-v3-70B-IT was tuned using a combination of full-parameter fine-tuning, on-policy alignment, and model merges of the best-performing checkpoints. Fine-tuning took approximately 3200 GPU hours on a single node of 8x H100-80GB GPUs.
## Data
Llama-SEA-LION-v3-70B-IT was trained on a wide range of synthetic instructions, alongside publicly available instructions hand-curated by the team with the assistance of native speakers. In addition, special care was taken to ensure that the datasets used had commercially permissive licenses through verification with the original data source.
## Call for Contributions
We encourage researchers, developers, and language enthusiasts to actively contribute to the enhancement and expansion of SEA-LION. Contributions can involve identifying and reporting bugs, sharing pre-training, instruction, and preference data, improving documentation usability, proposing and implementing new model evaluation tasks and metrics, or training versions of the model in additional Southeast Asian languages. Join us in shaping the future of SEA-LION by sharing your expertise and insights to make these models more accessible, accurate, and versatile. Please check out our GitHub for further information on the call for contributions.
## The Team
Chan Adwin, Cheng Nicholas, Choa Esther, Huang Yuli, Lau Wayne, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Limkonchotiwat Peerat, Liu Bing Jie Darius, Montalan Jann Railey, Ng Boon Cheong Raymond, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Brandon, Ong Tat-Wee David, Ong Zhi Hao, Rengarajan Hamsawardhini, Siow Bryan, Susanto Yosephine, Tai Ngee Chia, Tan Choon Meng, Teng Walter, Teo Eng Sipp Leslie, Teo Wei Yi, Tjhi William, Venkatadri Hulagadri Adithya, Yeo Yeow Tong, Yong Xianbin
## Acknowledgements
[AI Singapore](https://aisingapore.org/) is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore.
## Contact
For more info, please contact us using this [SEA-LION Inquiry Form](https://forms.gle/sLCUVb95wmGf43hi6)
[Link to SEA-LION's GitHub repository](https://github.com/aisingapore/sealion)
## Disclaimer
This is the repository for the commercial instruction-tuned model.
The model has _not_ been aligned for safety.
Developers and users should perform their own safety fine-tuning and related security measures.
In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes. |
3mily1u/fim-codegen-350m-mono-finetuned-attack-25 | 3mily1u | 2025-04-25T03:17:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"codegen",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-25T03:16:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sergioalves/d0f62694-a9a9-41ee-92fa-739328b8e778 | sergioalves | 2025-04-25T03:15:18Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"region:us"
] | null | 2025-04-25T03:04:05Z | ---
library_name: peft
license: llama3
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d0f62694-a9a9-41ee-92fa-739328b8e778
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: true
adapter: lora
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 7d91f3d76c4fac08_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7d91f3d76c4fac08_train_data.json
type:
field_instruction: user_status
field_output: user_persona
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: sergioalves/d0f62694-a9a9-41ee-92fa-739328b8e778
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/7d91f3d76c4fac08_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fac4d056-b459-4431-8129-825726a73dfd
wandb_project: s56-8
wandb_run: your_name
wandb_runid: fac4d056-b459-4431-8129-825726a73dfd
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d0f62694-a9a9-41ee-92fa-739328b8e778
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8397
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: 8-bit AdamW (`adamw_bnb_8bit`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8417 | 0.1411 | 200 | 0.8397 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Echo9Zulu/Qwen2.5-VL-7B-Instruct-int4_sym-ov | Echo9Zulu | 2025-04-25T00:29:57Z | 0 | 0 | null | [
"openvino",
"qwen2_5_vl",
"license:apache-2.0",
"region:us"
] | null | 2025-04-24T23:47:06Z | ---
license: apache-2.0
---
|
JOSESMOKE/tear_459 | JOSESMOKE | 2025-04-25T00:15:36Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-04-24T23:59:52Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
PhoenixB/d913eff7-2cc9-433f-9818-6ed25d2c3a79 | PhoenixB | 2025-04-25T00:14:31Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:princeton-nlp/Sheared-LLaMA-1.3B",
"base_model:adapter:princeton-nlp/Sheared-LLaMA-1.3B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-25T00:07:45Z | ---
library_name: peft
license: apache-2.0
base_model: princeton-nlp/Sheared-LLaMA-1.3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d913eff7-2cc9-433f-9818-6ed25d2c3a79
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.5.2`
```yaml
adapter: qlora
auto_find_batch_size: true
base_model: princeton-nlp/Sheared-LLaMA-1.3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 40e50bc14394f8e0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/40e50bc14394f8e0_train_data.json
type:
field_input: cons_rejected
field_instruction: main_ins
field_output: cons_chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
eval_max_new_tokens: 128
eval_sample_packing: false
eval_steps: 10
eval_table_size: null
flash_attention: true
fp16: false
fsdp:
- full_shard
- auto_wrap
fsdp_config:
fsdp_cpu_ram_efficient_loading: true
fsdp_limit_all_gathers: true
fsdp_offload_params: true
fsdp_sharding_strategy: FULL_SHARD
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sync_module_states: true
fsdp_use_orig_params: false
gpu_memory_limit: 80GiB
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: true
hub_model_id: PhoenixB/d913eff7-2cc9-433f-9818-6ed25d2c3a79
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 2e-4
liger_fused_linear_cross_entropy: true
liger_glu_activation: true
liger_layer_norm: true
liger_rms_norm: true
liger_rope: true
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/40e50bc14394f8e0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
plugins:
- axolotl.integrations.liger.LigerPlugin
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 4096
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 69fd4981-ecc5-4b7a-9b51-0df5aa2c5c09
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 69fd4981-ecc5-4b7a-9b51-0df5aa2c5c09
warmup_steps: 5
weight_decay: 0.0
```
</details><br>
# d913eff7-2cc9-433f-9818-6ed25d2c3a79
This model is a fine-tuned version of [princeton-nlp/Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1410
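Since this repository holds a PEFT (QLoRA) adapter, a minimal inference sketch loads it on top of the base model named above; loading in full precision here is an assumption, as the 4-bit setting from the config is optional at inference:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Attach the LoRA adapter from this repository to its base model.
base = AutoModelForCausalLM.from_pretrained("princeton-nlp/Sheared-LLaMA-1.3B")
model = PeftModel.from_pretrained(base, "PhoenixB/d913eff7-2cc9-433f-9818-6ed25d2c3a79")
tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/Sheared-LLaMA-1.3B")
```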
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: fused AdamW (`adamw_torch_fused`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0009 | 1 | 1.3690 |
| 1.2568 | 0.0089 | 10 | 1.2340 |
| 1.1562 | 0.0177 | 20 | 1.1539 |
| 1.1097 | 0.0266 | 30 | 1.1410 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
hackelle/rdnet_base-s1-v0.2.0 | hackelle | 2025-04-24T23:57:49Z | 0 | 0 | configilm | [
"configilm",
"safetensors",
"rdnet_base",
"BigEarthNet v2.0",
"Remote Sensing",
"Classification",
"image-classification",
"Multispectral",
"arxiv:2407.03653",
"license:mit",
"region:us"
] | image-classification | 2025-04-24T23:57:29Z | ---
thumbnail: "https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/RSiM_Logo_1.png"
tags:
- rdnet_base
- BigEarthNet v2.0
- Remote Sensing
- Classification
- image-classification
- Multispectral
library_name: configilm
license: mit
widget:
- src: example.png
example_title: Example
output:
- label: Agro-forestry areas
score: 0.000000
- label: Arable land
score: 0.000006
- label: Beaches, dunes, sands
score: 0.000029
- label: Broad-leaved forest
score: 0.000812
- label: Coastal wetlands
score: 0.000001
---
[TU Berlin](https://www.tu.berlin/) | [RSiM](https://rsim.berlin/) | [DIMA](https://www.dima.tu-berlin.de/menue/database_systems_and_information_management_group/) | [BigEarth](http://www.bigearth.eu/) | [BIFOLD](https://bifold.berlin/)
:---:|:---:|:---:|:---:|:---:
<a href="https://www.tu.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/tu-berlin-logo-long-red.svg" style="font-size: 1rem; height: 2em; width: auto" alt="TU Berlin Logo"/> | <a href="https://rsim.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/RSiM_Logo_1.png" style="font-size: 1rem; height: 2em; width: auto" alt="RSiM Logo"> | <a href="https://www.dima.tu-berlin.de/menue/database_systems_and_information_management_group/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/DIMA.png" style="font-size: 1rem; height: 2em; width: auto" alt="DIMA Logo"> | <a href="http://www.bigearth.eu/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/BigEarth.png" style="font-size: 1rem; height: 2em; width: auto" alt="BigEarth Logo"> | <a href="https://bifold.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/BIFOLD_Logo_farbig.png" style="font-size: 1rem; height: 2em; width: auto; margin-right: 1em" alt="BIFOLD Logo">
# Rdnet_base pretrained on BigEarthNet v2.0 using Sentinel-1 bands
<!-- Optional images -->
<!--
[Sentinel-1](https://sentinel.esa.int/web/sentinel/missions/sentinel-1) | [Sentinel-2](https://sentinel.esa.int/web/sentinel/missions/sentinel-2)
:---:|:---:
<a href="https://sentinel.esa.int/web/sentinel/missions/sentinel-1"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/sentinel_2.jpg" style="font-size: 1rem; height: 10em; width: auto; margin-right: 1em" alt="Sentinel-2 Satellite"/> | <a href="https://sentinel.esa.int/web/sentinel/missions/sentinel-2"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/sentinel_1.jpg" style="font-size: 1rem; height: 10em; width: auto; margin-right: 1em" alt="Sentinel-1 Satellite"/>
-->
This model was trained on the BigEarthNet v2.0 (also known as reBEN) dataset using the Sentinel-1 bands.
It was trained using the following parameters:
- Number of epochs: up to 100 (with early stopping after 5 epochs of no improvement in validation macro average precision)
- Batch size: 512
- Learning rate: 0.001
- Dropout rate: 0.15
- Drop Path rate: 0.15
- Learning rate scheduler: LinearWarmupCosineAnnealing for 1000 warmup steps
- Optimizer: AdamW
- Seed: 42
The weights published in this model card were obtained after 31 training epochs.
For more information, please visit the [official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts), where you can find the training scripts.
The model was evaluated on the test set of the BigEarthNet v2.0 dataset with the following results:
| Metric | Macro | Micro |
|:------------------|------------------:|------------------:|
| Average Precision | 0.580053 | 0.775468 |
| F1 Score | 0.524612 | 0.681331 |
| Precision | 0.622027 | 0.727463 |
# Example
| A Sentinel-1 image (VV, VH and VV/VH bands are used for visualization) |
|:---------------------------------------------------:|
|  |
| Class labels | Predicted scores |
|:--------------------------------------------------------------------------|--------------------------------------------------------------------------:|
| <p> Agro-forestry areas <br> Arable land <br> Beaches, dunes, sands <br> ... <br> Urban fabric </p> | <p> 0.000000 <br> 0.000006 <br> 0.000029 <br> ... <br> 0.000004 </p> |
To use the model, download the code that defines the model architecture from the
[official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts) and load the model using the
code below. Note that you have to install [`configilm`](https://pypi.org/project/configilm/) to use the provided code.
```python
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier
model = BigEarthNetv2_0_ImageClassifier.from_pretrained("path_to/huggingface_model_folder")
```
e.g.
```python
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier
model = BigEarthNetv2_0_ImageClassifier.from_pretrained(
"BIFOLD-BigEarthNetv2-0/rdnet_base-s1-v0.1.1")
```
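For a quick smoke test after loading, a hypothetical forward pass could look like the sketch below. The input shape (2 Sentinel-1 bands, VV and VH, at 120x120 pixels) and the plain-tensor `forward` call are assumptions not stated in this card; consult the reBEN training scripts for the exact preprocessing.
```python
import torch

# Hypothetical smoke test; `model` is the classifier loaded above.
# Band count and patch size are assumptions -- see the reBEN scripts.
model.eval()
dummy = torch.randn(1, 2, 120, 120)
with torch.no_grad():
    logits = model(dummy)
scores = torch.sigmoid(logits)  # multi-label scores, one per class
```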
If you use this model in your research or the provided code, please cite the following papers:
```bibtex
@article{clasen2024refinedbigearthnet,
title={reBEN: Refined BigEarthNet Dataset for Remote Sensing Image Analysis},
author={Clasen, Kai Norman and Hackel, Leonard and Burgert, Tom and Sumbul, Gencer and Demir, Beg{\"u}m and Markl, Volker},
year={2024},
eprint={2407.03653},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2407.03653},
}
```
```bibtex
@article{hackel2024configilm,
title={ConfigILM: A general purpose configurable library for combining image and language models for visual question answering},
author={Hackel, Leonard and Clasen, Kai Norman and Demir, Beg{\"u}m},
journal={SoftwareX},
volume={26},
pages={101731},
year={2024},
publisher={Elsevier}
}
```
|
maxsegan/gpt2_l1_64_100k | maxsegan | 2025-04-24T23:42:42Z | 0 | 0 | null | [
"pytorch",
"region:us"
] | null | 2025-04-24T23:42:22Z | # gpt2_l1_64_100k
## Model Details
- Block size: 1024
- Vocabulary size: 50304
- Layers: 12
- Heads: 12
- Embedding size: 768
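Only a raw PyTorch checkpoint is shipped, so loading depends on the original training script; as a sketch, a Hugging Face `GPT2Config` mirroring the dimensions above would be (how the checkpoint maps onto this config is an assumption):
```python
from transformers import GPT2Config

# Mirrors the numbers listed above.
config = GPT2Config(
    vocab_size=50304,
    n_positions=1024,  # block size
    n_embd=768,
    n_layer=12,
    n_head=12,
)
```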
|
mergekit-community/mergekit-nuslerp-oejdsoz | mergekit-community | 2025-04-24T23:27:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:mergekit-community/mergekit-model_stock-lvezkfe",
"base_model:merge:mergekit-community/mergekit-model_stock-lvezkfe",
"base_model:mergekit-community/mergekit-model_stock-qtseiad",
"base_model:merge:mergekit-community/mergekit-model_stock-qtseiad",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-24T23:22:59Z | ---
base_model:
- mergekit-community/mergekit-model_stock-lvezkfe
- mergekit-community/mergekit-model_stock-qtseiad
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the NuSLERP merge method.
### Models Merged
The following models were included in the merge:
* [mergekit-community/mergekit-model_stock-lvezkfe](https://huggingface.co/mergekit-community/mergekit-model_stock-lvezkfe)
* [mergekit-community/mergekit-model_stock-qtseiad](https://huggingface.co/mergekit-community/mergekit-model_stock-qtseiad)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mergekit-community/mergekit-model_stock-qtseiad
parameters:
weight: 0.7
- model: mergekit-community/mergekit-model_stock-lvezkfe
parameters:
weight: 0.3
merge_method: nuslerp
dtype: bfloat16
chat_template: "chatml"
tokenizer:
source: union
parameters:
normalize: true
```
|
mlfoundations-dev/b2_science_random_10k | mlfoundations-dev | 2025-04-24T23:17:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-24T16:45:46Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: b2_science_random_10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# b2_science_random_10k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/b2_science_random_10k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
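The batch-size totals are consistent with the per-device settings; a minimal check:
```python
# total batch size = per-device batch size * num_devices * grad accumulation
assert 1 * 4 * 32 == 128  # train
assert 8 * 4 == 32        # eval
```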
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mlfoundations-dev/b2_science_fasttext_3k | mlfoundations-dev | 2025-04-24T21:56:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-24T19:35:17Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: b2_science_fasttext_3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# b2_science_fasttext_3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/b2_science_fasttext_3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
procrastipredator/gemma-3-kk-qlora-adapter | procrastipredator | 2025-04-24T21:40:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-24T21:40:10Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** procrastipredator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
exp-models/HyperCLOVA-X-HyperClever-v1-20250425-preview | exp-models | 2025-04-24T21:18:37Z | 0 | 0 | null | [
"safetensors",
"llama",
"license:other",
"region:us"
] | null | 2025-04-24T16:45:37Z | ---
license: other
license_name: hyperclovax-seed
license_link: https://huggingface.co/naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B/raw/main/LICENSE
--- |
filipesantoscv11/46798891-26fe-4665-91ee-a0b1d5b87da3 | filipesantoscv11 | 2025-04-24T20:52:55Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:OpenBuddy/openbuddy-llama2-13b-v8.1-fp16",
"base_model:adapter:OpenBuddy/openbuddy-llama2-13b-v8.1-fp16",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-24T20:12:23Z | ---
library_name: peft
base_model: OpenBuddy/openbuddy-llama2-13b-v8.1-fp16
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 46798891-26fe-4665-91ee-a0b1d5b87da3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: OpenBuddy/openbuddy-llama2-13b-v8.1-fp16
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 2fc5c2abbeb880c5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2fc5c2abbeb880c5_train_data.json
type:
field_input: context
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: filipesantoscv11/46798891-26fe-4665-91ee-a0b1d5b87da3
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/2fc5c2abbeb880c5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b4720579-00f6-47ae-90b5-1d556182ce7f
wandb_project: s56-6
wandb_run: your_name
wandb_runid: b4720579-00f6-47ae-90b5-1d556182ce7f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 46798891-26fe-4665-91ee-a0b1d5b87da3
This model is a fine-tuned version of [OpenBuddy/openbuddy-llama2-13b-v8.1-fp16](https://huggingface.co/OpenBuddy/openbuddy-llama2-13b-v8.1-fp16) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0618
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9177 | 0.0580 | 200 | 1.0618 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
aiguy68/neurosam-model | aiguy68 | 2025-04-24T20:48:31Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"text-classification",
"region:us"
] | text-classification | 2025-04-24T04:27:24Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
pipeline_tag: text-classification
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF | mradermacher | 2025-04-24T20:37:28Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Darkhn/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B",
"base_model:quantized:Darkhn/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-24T11:58:45Z | ---
base_model: Darkhn/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Darkhn/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
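As a minimal sketch, the split files in the table below (e.g. the Q6_K parts) can be concatenated with a few lines of Python before loading; the filenames come from the table, everything else is generic:
```python
import shutil

# Stream-concatenate multi-part GGUF files into a single model file.
parts = [
    "Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q6_K.gguf.part1of2",
    "Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q6_K.gguf.part2of2",
]
with open("Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)  # chunked copy, avoids loading ~30 GB into RAM
```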
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B-GGUF/resolve/main/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
kk-aivio/ddce59ea-fa2a-4817-a62e-132769037dd5 | kk-aivio | 2025-04-24T20:17:35Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct",
"region:us"
] | null | 2025-04-24T20:17:18Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/Qwen2.5-0.5B-Instruct
model-index:
- name: kk-aivio/ddce59ea-fa2a-4817-a62e-132769037dd5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kk-aivio/ddce59ea-fa2a-4817-a62e-132769037dd5
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4126
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
jgerbscheid/segformer_b1-nlver_finetuned-1024-1024 | jgerbscheid | 2025-04-24T20:03:12Z | 38 | 0 | transformers | [
"transformers",
"safetensors",
"segformer",
"aerial_photography",
"rgb",
"segmentation",
"netherlands",
"image-segmentation",
"base_model:nvidia/segformer-b1-finetuned-cityscapes-1024-1024",
"base_model:finetune:nvidia/segformer-b1-finetuned-cityscapes-1024-1024",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2025-04-23T12:49:56Z | ---
library_name: transformers
tags:
- aerial_photography
- rgb
- segmentation
- netherlands
license: mit
base_model:
- nvidia/segformer-b1-finetuned-cityscapes-1024-1024
pipeline_tag: image-segmentation
---
# NL-aerial segmentation
This is a SegFormer segmentation model finetuned on random samples from the full 41,000 square kilometers of Dutch aerial photography data (see [pdok aerial data](https://www.pdok.nl/introductie/-/article/pdok-luchtfoto-rgb-open-)), using the BGT (see [pdok BGT data](https://www.pdok.nl/introductie/-/article/basisregistratie-grootschalige-topografie-bgt-)).
Specifically, it takes 1024x1024 aerial photographs captured at a resolution of 8 cm/pixel and predicts water, buildings, roads/pavement, and vegetation.
This model is part of the NL-veranderdetectie project ([summary](https://www.winnovatie.nl/kennisbank/3041030.aspx?t=Learning-paper-NL-Veranderdetectie-Fase-1) in Dutch).
The model was trained using this [codebase](https://gitlab.com/hetwaterschapshuis/kenniscentrum/tooling/nlveranderdetectie/); see the repo for more details.
## Model Details
Regular segformer, with the classification head adjusted to 5 classes.
### Model Description
- **Developed by:** Het [Waterschapshuis](https://www.hetwaterschapshuis.nl/), in collaboration with the Dutch Waterboards
<!-- - **Funded by [optional]:** [] -->
<!-- - **Shared by [optional]:** [More Information Needed] -->
- **Model type:** [Segformer Semantic Segmentation Model]
- **License:** [MIT]
- **Finetuned from model:** [nvidia/segformer-b1-finetuned-cityscapes-1024-1024]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [https://gitlab.com/hetwaterschapshuis/kenniscentrum/tooling/nlveranderdetectie/](https://gitlab.com/hetwaterschapshuis/kenniscentrum/tooling/nlveranderdetectie/)
<!-- - **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed] -->
## Uses
This model is used to create a segmentation map of the Dutch waterways based on aerial imagery. This segmentation map is then used to update the BGT, a national map maintained by the Dutch government.
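A hedged loading sketch using the `transformers` Segformer classes is shown below; the repo id is this card's repository, while the resize/normalization settings are assumptions (the authoritative preprocessing lives in the nlveranderdetectie configs):
```python
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

repo = "jgerbscheid/segformer_b1-nlver_finetuned-1024-1024"
model = SegformerForSemanticSegmentation.from_pretrained(repo)
processor = SegformerImageProcessor(size={"height": 1024, "width": 1024})

image = Image.open("aerial_tile.png").convert("RGB")  # hypothetical input tile
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_classes, H/4, W/4)
pred = logits.argmax(dim=1)  # per-pixel class ids
```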
#### Training Hyperparameters
See the nlveranderdetectie repository for details of the training setup: [https://gitlab.com/hetwaterschapshuis/kenniscentrum/tooling/nlveranderdetectie/](https://gitlab.com/hetwaterschapshuis/kenniscentrum/tooling/nlveranderdetectie/); specifically, look at the YAML configs in the run/configs directory.
### Hardware
The model was trained on a single nvidia 3090 GPU for ~2 days or ~126k forward passes with a batch size of 8.
## Model Card Authors
The model card was written by J. Gerbscheid, email: [email protected]
## Model Card Contact
The model was trained by J. Gerbscheid, email: [email protected]
|
TareksLab/Z-MODEL4-V1-SCE | TareksLab | 2025-04-24T19:32:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"base_model:Sao10K/L3-70B-Euryale-v2.1",
"base_model:merge:Sao10K/L3-70B-Euryale-v2.1",
"base_model:TareksLab/Fetishist-V5-LLaMA-70B",
"base_model:merge:TareksLab/Fetishist-V5-LLaMA-70B",
"base_model:TheDrummer/Fallen-Llama-3.3-R1-70B-v1",
"base_model:merge:TheDrummer/Fallen-Llama-3.3-R1-70B-v1",
"base_model:mlabonne/Hermes-3-Llama-3.1-70B-lorablated",
"base_model:merge:mlabonne/Hermes-3-Llama-3.1-70B-lorablated",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-24T19:20:54Z | ---
base_model:
- Sao10K/L3-70B-Euryale-v2.1
- mlabonne/Hermes-3-Llama-3.1-70B-lorablated
- TheDrummer/Fallen-Llama-3.3-R1-70B-v1
- TareksLab/Fetishist-V5-LLaMA-70B
library_name: transformers
tags:
- mergekit
- merge
---
# MERGE4
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [mlabonne/Hermes-3-Llama-3.1-70B-lorablated](https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-70B-lorablated) as a base.
### Models Merged
The following models were included in the merge:
* [Sao10K/L3-70B-Euryale-v2.1](https://huggingface.co/Sao10K/L3-70B-Euryale-v2.1)
* [TheDrummer/Fallen-Llama-3.3-R1-70B-v1](https://huggingface.co/TheDrummer/Fallen-Llama-3.3-R1-70B-v1)
* [TareksLab/Fetishist-V5-LLaMA-70B](https://huggingface.co/TareksLab/Fetishist-V5-LLaMA-70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1
parameters:
select_topk: 0.7
- model: TareksLab/Fetishist-V5-LLaMA-70B
parameters:
select_topk: 0.7
- model: Sao10K/L3-70B-Euryale-v2.1
parameters:
select_topk: 0.7
- model: mlabonne/Hermes-3-Llama-3.1-70B-lorablated
parameters:
select_topk: 0.7
base_model: mlabonne/Hermes-3-Llama-3.1-70B-lorablated
merge_method: sce
parameters:
int8_mask: true
tokenizer:
source: TareksLab/Fetishist-V5-LLaMA-70B
chat_template: llama3
dtype: bfloat16
```
|
dgambettaphd/M_llm3_gen6_run0_X_doc1000_synt64_tot128_SYNLAST | dgambettaphd | 2025-04-24T17:56:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-24T17:56:03Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ReadyArt/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B_EXL2_5bpw_H8 | ReadyArt | 2025-04-24T17:29:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:Mawdistical/Pawdistic-FurMittens-24B",
"base_model:merge:Mawdistical/Pawdistic-FurMittens-24B",
"base_model:ReadyArt/Forgotten-Safeword-24B-v4.0",
"base_model:merge:ReadyArt/Forgotten-Safeword-24B-v4.0",
"base_model:ReadyArt/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B",
"base_model:merge:ReadyArt/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B",
"base_model:ReadyArt/The-Omega-Directive-M-24B-v1.1",
"base_model:merge:ReadyArt/The-Omega-Directive-M-24B-v1.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] | text-generation | 2025-04-24T17:20:18Z | ---
base_model:
- ReadyArt/The-Omega-Directive-M-24B-v1.1
- ReadyArt/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B
- ReadyArt/Forgotten-Safeword-24B-v4.0
- Mawdistical/Pawdistic-FurMittens-24B
library_name: transformers
tags:
- mergekit
- merge
---
# output-model
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [ReadyArt/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B](https://huggingface.co/ReadyArt/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B) as a base.
### Models Merged
The following models were included in the merge:
* [ReadyArt/The-Omega-Directive-M-24B-v1.1](https://huggingface.co/ReadyArt/The-Omega-Directive-M-24B-v1.1)
* [ReadyArt/Forgotten-Safeword-24B-v4.0](https://huggingface.co/ReadyArt/Forgotten-Safeword-24B-v4.0)
* [Mawdistical/Pawdistic-FurMittens-24B](https://huggingface.co/Mawdistical/Pawdistic-FurMittens-24B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
base_model: ReadyArt/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B
models:
- model: ReadyArt/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B
parameters:
weight: 0.25
- model: ReadyArt/Forgotten-Safeword-24B-v4.0
parameters:
weight: 0.25
- model: ReadyArt/The-Omega-Directive-M-24B-v1.1
parameters:
weight: 0.25
- model: Mawdistical/Pawdistic-FurMittens-24B
parameters:
weight: 0.25
parameters:
density: 0.3
tokenizer:
source: union
chat_template: auto
```
|
keioio/chatbot_mental_health_using_LLaMa_3.1_instruct_v2 | keioio | 2025-04-24T17:24:39Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-24T15:59:43Z | ---
base_model: unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** keioio
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion-GGUF | mradermacher | 2025-04-24T16:51:33Z | 200 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Nexesenex/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion",
"base_model:quantized:Nexesenex/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-12T04:36:50Z | ---
base_model: Nexesenex/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Nexesenex/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
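A single quant from the table below can also be fetched programmatically; a minimal sketch with `huggingface_hub` (assumed installed), using the Q4_K_M filename from the table:
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion-GGUF",
    filename="Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion.Q4_K_M.gguf",
)
print(path)  # local path to the downloaded GGUF file
```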
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion-GGUF/resolve/main/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion-GGUF/resolve/main/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion-GGUF/resolve/main/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion-GGUF/resolve/main/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion-GGUF/resolve/main/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion-GGUF/resolve/main/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion-GGUF/resolve/main/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion-GGUF/resolve/main/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion-GGUF/resolve/main/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion-GGUF/resolve/main/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion-GGUF/resolve/main/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion-GGUF/resolve/main/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion-GGUF/resolve/main/Llama_3.x_70b_L3.3-calme-2.3_abliterated_fusion.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
moyixiao/llama3_8b_badam2 | moyixiao | 2025-04-24T16:50:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-24T16:29:02Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nathanialhunt2000/a9262989-7dda-4dba-8926-e1cef502d6a6 | nathanialhunt2000 | 2025-04-24T16:44:07Z | 0 | 0 | transformers | [
"transformers",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-04-24T16:43:17Z | ---
library_name: transformers
model_name: nathanialhunt2000/a9262989-7dda-4dba-8926-e1cef502d6a6
tags:
- generated_from_trainer
licence: license
---
# Model Card for nathanialhunt2000/a9262989-7dda-4dba-8926-e1cef502d6a6
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nathanialhunt2000/a9262989-7dda-4dba-8926-e1cef502d6a6", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ezrab/ppo-LunarLander-v2-unit8-3 | ezrab | 2025-04-24T16:29:21Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2025-04-24T16:29:04Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 242.07 +/- 91.88
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'LunarLander',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 5000000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 1024,
 'anneal_lr': True,
 'gamma': 0.999,
 'gae_lambda': 0.98,
 'num_minibatches': 16,
 'update_epochs': 8,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'batch_size': 4096,
 'minibatch_size': 256,
 'num_iterations': 1220}
```
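The derived sizes at the bottom of the dictionary follow from the other settings; as a check (standard cleanRL arithmetic):
```python
batch_size = 4 * 1024               # num_envs * num_steps
minibatch_size = 4096 // 16         # batch_size // num_minibatches
num_iterations = 5_000_000 // 4096  # total_timesteps // batch_size
assert (batch_size, minibatch_size, num_iterations) == (4096, 256, 1220)
```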
|
JeorgeC/BocchiGemma3_4BVer01-finetuned | JeorgeC | 2025-04-24T16:23:12Z | 0 | 0 | transformers | [
"transformers",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-23T06:53:45Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** JeorgeC
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF | mradermacher | 2025-04-24T12:17:11Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:minpeter/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf",
"base_model:quantized:minpeter/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-24T11:55:59Z | ---
base_model: minpeter/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf
language:
- en
library_name: transformers
license: other
license_link: LICENSE
license_name: hyperclovax-seed
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/minpeter/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
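Once a quant is downloaded, it can be run locally; a minimal sketch with `llama-cpp-python` (assumed installed), using the i1-Q4_K_M filename from the table below:
```python
from llama_cpp import Llama

llm = Llama(model_path="HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-Q4_K_M.gguf")
out = llm("Hello, how are you?", max_tokens=32)
print(out["choices"][0]["text"])
```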
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF/resolve/main/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-IQ1_S.gguf) | i1-IQ1_S | 0.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF/resolve/main/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-IQ1_M.gguf) | i1-IQ1_M | 0.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF/resolve/main/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF/resolve/main/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF/resolve/main/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-IQ2_S.gguf) | i1-IQ2_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF/resolve/main/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-IQ2_M.gguf) | i1-IQ2_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF/resolve/main/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF/resolve/main/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF/resolve/main/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-Q2_K.gguf) | i1-Q2_K | 0.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF/resolve/main/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF/resolve/main/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-IQ3_S.gguf) | i1-IQ3_S | 0.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF/resolve/main/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF/resolve/main/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-IQ3_M.gguf) | i1-IQ3_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF/resolve/main/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF/resolve/main/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF/resolve/main/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF/resolve/main/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF/resolve/main/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-Q4_0.gguf) | i1-Q4_0 | 0.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF/resolve/main/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF/resolve/main/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF/resolve/main/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-Q4_1.gguf) | i1-Q4_1 | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF/resolve/main/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF/resolve/main/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf-i1-GGUF/resolve/main/HyperCLOVAX-SEED-Text-Instruct-0.5B-hf.i1-Q6_K.gguf) | i1-Q6_K | 0.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
VParkar/VParkar | VParkar | 2025-04-24T11:52:25Z | 0 | 0 | peft | [
"peft",
"llava",
"region:us"
] | null | 2025-04-24T11:51:55Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
aimagelab/LLaVA_MORE-llama_3_1-8B-S2-pretrain | aimagelab | 2025-04-24T11:51:54Z | 2 | 0 | transformers | [
"transformers",
"llava_llama",
"text-generation",
"image-text-to-text",
"dataset:liuhaotian/LLaVA-CC3M-Pretrain-595K",
"arxiv:2503.15621",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-08-16T15:03:04Z | ---
license: apache-2.0
datasets:
- liuhaotian/LLaVA-CC3M-Pretrain-595K
library_name: transformers
pipeline_tag: image-text-to-text
---
# Model Card: LLaVA_MORE-llama_3_1-8B-S2-pretrain
```LLaVA-MORE``` enhances the well-known LLaVA architecture by integrating LLaMA 3.1 as the language model. We are publicly releasing the stage-one and stage-two checkpoints for the first model, which has 8B parameters.
In this model space, you will find the stage one (pretrain) weights of LLaVA-MORE LLaMA 3.1 8B.
For more information, visit our [LLaVA-MORE](https://github.com/aimagelab/LLaVA-MORE) repository.
## Citation
If you make use of our work, please cite our repo:
```bibtex
@article{cocchi2025llava,
title={{LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for Enhanced Visual Instruction Tuning}},
author={Cocchi, Federico and Moratelli, Nicholas and Caffagni, Davide and Sarto, Sara and Baraldi, Lorenzo and Cornia, Marcella and Cucchiara, Rita},
journal={arXiv preprint arXiv:2503.15621},
year={2025}
}
``` |
Homebax/axionis-0.5-flash | Homebax | 2025-04-24T11:13:17Z | 0 | 0 | allennlp | [
"allennlp",
"any-to-any",
"cs",
"sk",
"pl",
"dataset:nvidia/OpenCodeReasoning",
"dataset:nvidia/Llama-Nemotron-Post-Training-Dataset",
"dataset:PJMixers-Dev/nvidia_Llama-Nemotron-Post-Training-Dataset-v1-partial-code",
"dataset:facebook/natural_reasoning",
"base_model:Qwen/Qwen2.5-Omni-7B",
"base_model:finetune:Qwen/Qwen2.5-Omni-7B",
"license:apache-2.0",
"region:us"
] | any-to-any | 2025-04-24T11:10:13Z | ---
license: apache-2.0
datasets:
- nvidia/OpenCodeReasoning
- nvidia/Llama-Nemotron-Post-Training-Dataset
- PJMixers-Dev/nvidia_Llama-Nemotron-Post-Training-Dataset-v1-partial-code
- facebook/natural_reasoning
language:
- cs
- sk
- pl
metrics:
- charcut_mt
- bertscore
- accuracy
- bleu
base_model:
- deepseek-ai/DeepSeek-V3-0324
- meta-llama/Llama-4-Scout-17B-16E-Instruct
- microsoft/bitnet-b1.58-2B-4T
- Qwen/Qwen2.5-Omni-7B
new_version: meta-llama/Llama-4-Scout-17B-16E-Instruct
pipeline_tag: any-to-any
library_name: allennlp
--- |
mradermacher/DoodDoodGRPO-GGUF | mradermacher | 2025-04-24T10:59:52Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"trl",
"grpo",
"en",
"base_model:DoodDood/DoodDoodGRPO",
"base_model:quantized:DoodDood/DoodDoodGRPO",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-24T10:58:36Z | ---
base_model: DoodDood/DoodDoodGRPO
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- trl
- grpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DoodDood/DoodDoodGRPO
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
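For a quick local test, any of the single-file quants below can also be loaded directly with `llama-cpp-python`. This is a minimal sketch, not an official example: the chosen quant file, context size, and prompt are placeholders.

```python
from llama_cpp import Llama

# Load one of the GGUF quants from the table below (path is a placeholder).
llm = Llama(model_path="DoodDoodGRPO.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```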
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DoodDoodGRPO-GGUF/resolve/main/DoodDoodGRPO.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/DoodDoodGRPO-GGUF/resolve/main/DoodDoodGRPO.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/DoodDoodGRPO-GGUF/resolve/main/DoodDoodGRPO.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/DoodDoodGRPO-GGUF/resolve/main/DoodDoodGRPO.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DoodDoodGRPO-GGUF/resolve/main/DoodDoodGRPO.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/DoodDoodGRPO-GGUF/resolve/main/DoodDoodGRPO.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DoodDoodGRPO-GGUF/resolve/main/DoodDoodGRPO.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DoodDoodGRPO-GGUF/resolve/main/DoodDoodGRPO.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/DoodDoodGRPO-GGUF/resolve/main/DoodDoodGRPO.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/DoodDoodGRPO-GGUF/resolve/main/DoodDoodGRPO.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DoodDoodGRPO-GGUF/resolve/main/DoodDoodGRPO.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DoodDoodGRPO-GGUF/resolve/main/DoodDoodGRPO.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
genki10/BERT_V8_sp10_lw40_ex30_lo00_k1_k1_fold0 | genki10 | 2025-04-24T10:30:45Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-24T10:16:50Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_V8_sp10_lw40_ex30_lo00_k1_k1_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_V8_sp10_lw40_ex30_lo00_k1_k1_fold0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9808
- Qwk: 0.3379
- Mse: 0.9808
- Rmse: 0.9904
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
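For reference, these settings map onto the standard `Trainer` API roughly as follows (a sketch only; `output_dir` is an assumption, and whatever early-stopping callback ended training at epoch 59 is not shown):

```python
from transformers import TrainingArguments

# Reconstruction of the listed hyperparameters (sketch, not the original script).
args = TrainingArguments(
    output_dir="BERT_V8_sp10_lw40_ex30_lo00_k1_k1_fold0",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=100,
)
```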
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 1.0 | 2 | 8.9702 | 0.0 | 8.9702 | 2.9950 |
| No log | 2.0 | 4 | 8.0136 | 0.0 | 8.0136 | 2.8308 |
| No log | 3.0 | 6 | 6.9437 | 0.0 | 6.9437 | 2.6351 |
| No log | 4.0 | 8 | 6.3664 | -0.0033 | 6.3664 | 2.5232 |
| No log | 5.0 | 10 | 5.5572 | 0.0119 | 5.5572 | 2.3574 |
| No log | 6.0 | 12 | 4.8767 | 0.0115 | 4.8767 | 2.2083 |
| No log | 7.0 | 14 | 4.1298 | 0.0039 | 4.1298 | 2.0322 |
| No log | 8.0 | 16 | 3.4601 | 0.0 | 3.4601 | 1.8601 |
| No log | 9.0 | 18 | 2.7786 | 0.0 | 2.7786 | 1.6669 |
| No log | 10.0 | 20 | 2.1818 | 0.0897 | 2.1818 | 1.4771 |
| No log | 11.0 | 22 | 1.7969 | 0.0316 | 1.7969 | 1.3405 |
| No log | 12.0 | 24 | 1.5506 | 0.0316 | 1.5506 | 1.2452 |
| No log | 13.0 | 26 | 1.2899 | 0.0316 | 1.2899 | 1.1357 |
| No log | 14.0 | 28 | 1.1221 | 0.0316 | 1.1221 | 1.0593 |
| No log | 15.0 | 30 | 1.0436 | 0.0316 | 1.0436 | 1.0215 |
| No log | 16.0 | 32 | 0.8914 | 0.0426 | 0.8914 | 0.9442 |
| No log | 17.0 | 34 | 0.9683 | 0.0141 | 0.9683 | 0.9840 |
| No log | 18.0 | 36 | 0.8230 | 0.2422 | 0.8230 | 0.9072 |
| No log | 19.0 | 38 | 1.1504 | 0.0538 | 1.1504 | 1.0726 |
| No log | 20.0 | 40 | 1.3972 | 0.0909 | 1.3972 | 1.1820 |
| No log | 21.0 | 42 | 0.8380 | 0.2887 | 0.8380 | 0.9154 |
| No log | 22.0 | 44 | 0.7439 | 0.2226 | 0.7439 | 0.8625 |
| No log | 23.0 | 46 | 0.8533 | 0.1053 | 0.8533 | 0.9238 |
| No log | 24.0 | 48 | 0.7797 | 0.1927 | 0.7797 | 0.8830 |
| No log | 25.0 | 50 | 0.6253 | 0.3473 | 0.6253 | 0.7908 |
| No log | 26.0 | 52 | 0.9396 | 0.2916 | 0.9396 | 0.9693 |
| No log | 27.0 | 54 | 1.4233 | 0.1837 | 1.4233 | 1.1930 |
| No log | 28.0 | 56 | 1.1770 | 0.1797 | 1.1770 | 1.0849 |
| No log | 29.0 | 58 | 0.6667 | 0.4945 | 0.6667 | 0.8165 |
| No log | 30.0 | 60 | 0.5842 | 0.3501 | 0.5842 | 0.7644 |
| No log | 31.0 | 62 | 0.6174 | 0.3387 | 0.6174 | 0.7858 |
| No log | 32.0 | 64 | 0.6236 | 0.3670 | 0.6236 | 0.7897 |
| No log | 33.0 | 66 | 0.6669 | 0.4042 | 0.6669 | 0.8166 |
| No log | 34.0 | 68 | 0.7465 | 0.3602 | 0.7465 | 0.8640 |
| No log | 35.0 | 70 | 0.8770 | 0.3317 | 0.8770 | 0.9365 |
| No log | 36.0 | 72 | 0.9559 | 0.3140 | 0.9559 | 0.9777 |
| No log | 37.0 | 74 | 0.8209 | 0.3280 | 0.8209 | 0.9060 |
| No log | 38.0 | 76 | 0.7384 | 0.3321 | 0.7384 | 0.8593 |
| No log | 39.0 | 78 | 0.7857 | 0.3287 | 0.7857 | 0.8864 |
| No log | 40.0 | 80 | 1.0877 | 0.2729 | 1.0877 | 1.0429 |
| No log | 41.0 | 82 | 1.5115 | 0.1696 | 1.5115 | 1.2294 |
| No log | 42.0 | 84 | 1.7673 | 0.1822 | 1.7673 | 1.3294 |
| No log | 43.0 | 86 | 1.7437 | 0.1824 | 1.7437 | 1.3205 |
| No log | 44.0 | 88 | 1.4460 | 0.2016 | 1.4460 | 1.2025 |
| No log | 45.0 | 90 | 1.0318 | 0.2650 | 1.0318 | 1.0158 |
| No log | 46.0 | 92 | 0.7998 | 0.3472 | 0.7998 | 0.8943 |
| No log | 47.0 | 94 | 0.7302 | 0.3747 | 0.7302 | 0.8545 |
| No log | 48.0 | 96 | 0.7029 | 0.3875 | 0.7029 | 0.8384 |
| No log | 49.0 | 98 | 0.7327 | 0.3633 | 0.7327 | 0.8560 |
| No log | 50.0 | 100 | 0.7715 | 0.4094 | 0.7715 | 0.8784 |
| No log | 51.0 | 102 | 0.8248 | 0.3814 | 0.8248 | 0.9082 |
| No log | 52.0 | 104 | 0.8408 | 0.3966 | 0.8408 | 0.9170 |
| No log | 53.0 | 106 | 0.8438 | 0.3986 | 0.8438 | 0.9186 |
| No log | 54.0 | 108 | 0.8196 | 0.4153 | 0.8196 | 0.9053 |
| No log | 55.0 | 110 | 0.8260 | 0.4120 | 0.8260 | 0.9088 |
| No log | 56.0 | 112 | 0.8530 | 0.4091 | 0.8530 | 0.9236 |
| No log | 57.0 | 114 | 0.9044 | 0.3744 | 0.9044 | 0.9510 |
| No log | 58.0 | 116 | 0.9406 | 0.3550 | 0.9406 | 0.9699 |
| No log | 59.0 | 118 | 0.9808 | 0.3379 | 0.9808 | 0.9904 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
crossdelenna/orpheus_lora_model | crossdelenna | 2025-04-24T09:59:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-24T09:59:46Z | ---
base_model: unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** crossdelenna
- **License:** apache-2.0
- **Finetuned from model :** unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
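A minimal loading sketch with Unsloth (untested; `max_seq_length` and 4-bit loading are assumptions, and `FastLanguageModel` resolves the adapter against its base model automatically):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="crossdelenna/orpheus_lora_model",
    max_seq_length=2048,   # assumption
    load_in_4bit=True,     # matches the bnb-4bit base model
)
FastLanguageModel.for_inference(model)  # switch to inference mode
```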
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ASethi04/google-gemma-2-9b-legalbench-1-lora-1-0.004 | ASethi04 | 2025-04-24T09:49:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-9b",
"base_model:finetune:google/gemma-2-9b",
"endpoints_compatible",
"region:us"
] | null | 2025-04-24T09:49:16Z | ---
base_model: google/gemma-2-9b
library_name: transformers
model_name: google-gemma-2-9b-legalbench-1-lora-1-0.004
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for google-gemma-2-9b-legalbench-1-lora-1-0.004
This model is a fine-tuned version of [google/gemma-2-9b](https://huggingface.co/google/gemma-2-9b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/google-gemma-2-9b-legalbench-1-lora-1-0.004", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/gqaoyfvf)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Raghavi20/whisper-tiny-minds14-enUS | Raghavi20 | 2025-04-24T09:08:57Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-04-24T06:37:05Z | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-minds14-enUS
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.29071332436069985
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-minds14-enUS
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6872
- Wer: 0.2907
- Wer Ortho: 0.2907
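For a quick check of the checkpoint, the standard ASR pipeline can be used (a minimal sketch; the audio path is a placeholder, and input audio is expected at 16 kHz):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Raghavi20/whisper-tiny-minds14-enUS",
)
print(asr("path/to/audio.wav")["text"])  # placeholder path
```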
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Wer Ortho |
|:-------------:|:-------:|:----:|:---------------:|:------:|:---------:|
| 0.8922 | 1.7241 | 50 | 0.7444 | 0.3378 | 0.3378 |
| 0.3603 | 3.4483 | 100 | 0.5051 | 0.3048 | 0.3048 |
| 0.1591 | 5.1724 | 150 | 0.5103 | 0.3028 | 0.3028 |
| 0.0449 | 6.8966 | 200 | 0.5746 | 0.2941 | 0.2941 |
| 0.0104 | 8.6207 | 250 | 0.6113 | 0.3022 | 0.3022 |
| 0.0035 | 10.3448 | 300 | 0.6488 | 0.2867 | 0.2867 |
| 0.0011 | 12.0690 | 350 | 0.6746 | 0.2907 | 0.2907 |
| 0.0008 | 13.7931 | 400 | 0.6828 | 0.2894 | 0.2894 |
| 0.0007 | 15.5172 | 450 | 0.6872 | 0.2907 | 0.2907 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Redeem-Craze-viral-video-original/exclusive-clip.Redeem.Craze.viral.video.leaked.full.hd | Redeem-Craze-viral-video-original | 2025-04-24T08:57:20Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-24T08:57:13Z |
<a href="https://sdu.sk/9Ip"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/9Ip" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/9Ip" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
Sophie-Rain-Spider-ManX-Videos/Sophie.Rain.Spider-Man.Video.Link | Sophie-Rain-Spider-ManX-Videos | 2025-04-24T08:40:29Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-24T08:40:03Z |
<a href="https://sdu.sk/9Ip"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/9Ip" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/9Ip" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
saiteki-kai/DeBERTa-QA-002 | saiteki-kai | 2025-04-24T06:09:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"multi-label",
"question-answering",
"generated_from_trainer",
"dataset:beavertails",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-23T22:04:35Z | ---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-large
tags:
- multi-label
- question-answering
- text-classification
- generated_from_trainer
datasets:
- beavertails
metrics:
- accuracy
model-index:
- name: DeBERTa-QA-002
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: saiteki-kai/Beavertails-it
type: beavertails
metrics:
- name: Accuracy
type: accuracy
value: 0.6890740925574741
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeBERTa-QA-002
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the saiteki-kai/Beavertails-it dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0838
- Accuracy: 0.6891
- Macro F1: 0.6334
- Macro Precision: 0.7352
- Macro Recall: 0.6002
- Micro F1: 0.7504
- Micro Precision: 0.7872
- Micro Recall: 0.7169
- Flagged/accuracy: 0.8507
- Flagged/precision: 0.8907
- Flagged/recall: 0.8341
- Flagged/f1: 0.8614
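Because this is a multi-label classifier, inference should return an independent score per label rather than a single softmax class. A minimal sketch, assuming the saved config marks the model as multi-label (so the pipeline applies a sigmoid) and using an assumed 0.5 decision threshold:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="saiteki-kai/DeBERTa-QA-002",
    top_k=None,  # return scores for every label
)
scores = clf("Question: ...? Answer: ...")[0]  # input format is an assumption
flagged = [s for s in scores if s["score"] > 0.5]  # assumed threshold
print(flagged)
```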
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro F1 | Macro Precision | Macro Recall | Micro F1 | Micro Precision | Micro Recall | Flagged/accuracy | Flagged/precision | Flagged/recall | Flagged/f1 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:--------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|
| 0.0929 | 1.0 | 16907 | 0.1063 | 0.6411 | 0.3931 | 0.4731 | 0.3666 | 0.6953 | 0.7654 | 0.6369 | 0.8122 | 0.8771 | 0.7705 | 0.8204 |
| 0.089 | 2.0 | 33814 | 0.0917 | 0.6763 | 0.5366 | 0.6156 | 0.4844 | 0.7248 | 0.8028 | 0.6606 | 0.8320 | 0.9007 | 0.7846 | 0.8386 |
| 0.0725 | 3.0 | 50721 | 0.0882 | 0.6800 | 0.6017 | 0.6800 | 0.5624 | 0.7390 | 0.7848 | 0.6982 | 0.8421 | 0.8921 | 0.8148 | 0.8517 |
| 0.0743 | 4.0 | 67628 | 0.0861 | 0.6808 | 0.6195 | 0.6776 | 0.5858 | 0.7450 | 0.7799 | 0.7131 | 0.8461 | 0.8833 | 0.8336 | 0.8578 |
| 0.0771 | 5.0 | 84535 | 0.0852 | 0.6826 | 0.6230 | 0.6794 | 0.5943 | 0.7473 | 0.7778 | 0.7190 | 0.8480 | 0.8812 | 0.8401 | 0.8602 |
| 0.0739 | 6.0 | 101442 | 0.0851 | 0.6874 | 0.6250 | 0.7573 | 0.5892 | 0.7477 | 0.7861 | 0.7129 | 0.8489 | 0.8896 | 0.8317 | 0.8597 |
| 0.0679 | 7.0 | 118349 | 0.0843 | 0.6879 | 0.6225 | 0.7252 | 0.5896 | 0.7484 | 0.7882 | 0.7125 | 0.8489 | 0.8894 | 0.8320 | 0.8597 |
| 0.0814 | 8.0 | 135256 | 0.0841 | 0.6870 | 0.6338 | 0.7276 | 0.6052 | 0.7503 | 0.7838 | 0.7196 | 0.8503 | 0.8864 | 0.8384 | 0.8617 |
| 0.0725 | 9.0 | 152163 | 0.0838 | 0.6903 | 0.6319 | 0.7376 | 0.5937 | 0.7494 | 0.7915 | 0.7115 | 0.8504 | 0.8951 | 0.8282 | 0.8604 |
| 0.0784 | 10.0 | 169070 | 0.0838 | 0.6891 | 0.6334 | 0.7351 | 0.6002 | 0.7504 | 0.7871 | 0.7169 | 0.8507 | 0.8907 | 0.8340 | 0.8614 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu118
- Datasets 3.5.0
- Tokenizers 0.21.0
|
allen9926/DSAA6000_Assignment4_dpo | allen9926 | 2025-04-24T05:54:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-24T05:27:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Romain-XV/ac76bbec-f096-474f-bd2f-1edbcb51ec8d | Romain-XV | 2025-04-24T04:55:01Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"phi3",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"custom_code",
"arxiv:2305.18290",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:finetune:microsoft/Phi-3-mini-4k-instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-24T02:08:55Z | ---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: transformers
model_name: ac76bbec-f096-474f-bd2f-1edbcb51ec8d
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for ac76bbec-f096-474f-bd2f-1edbcb51ec8d
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Romain-XV/ac76bbec-f096-474f-bd2f-1edbcb51ec8d", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/romain_fnc-xventures/Gradients-On-Demand/runs/5wts7n6q)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Amyww/zxcvbnmlkjhgfdsaqwertyuiop0987654321qwertyuioplku7yyy | Amyww | 2025-04-24T03:23:50Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-04-24T03:23:50Z | ---
license: bigcode-openrail-m
---
|
TOMFORD79/10 | TOMFORD79 | 2025-04-24T02:40:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-23T17:10:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
facebook/webssl-mae300m-full2b-224 | facebook | 2025-04-24T02:30:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-feature-extraction",
"arxiv:2504.01017",
"license:cc-by-nc-4.0",
"region:us"
] | image-feature-extraction | 2025-04-22T01:37:58Z | ---
library_name: transformers
license: cc-by-nc-4.0
inference: false
---
# Web-SSL MAE ViT-L (300M): 2B MetaCLIP data, 224 Resolution
A 300 million parameter Vision Transformer (ViT-L) trained with Masked Autoencoder (MAE) self-supervised learning on web-scale image data without language supervision. Introduced in ["Scaling Language-Free Visual Representation Learning"](https://arxiv.org/abs/2504.01017) (Fan et al., 2025).
## Model Details
- **Architecture**: ViT-L (Large)
- **Parameters**: 300M
- **Resolution**: 224×224 pixels
- **Training**: Self-supervised Web-MAE on 2B image samples from MetaCLIP web data
## Model Descriptions
Web-SSL MAE ViT-L is a 300 million parameter Vision Transformer model trained using masked autoencoder self-supervised learning on 2 billion web images without language supervision. This model demonstrates that pure visual learning, when scaled appropriately, can match or exceed the performance of language-supervised models like CLIP across various vision tasks. As part of the Web-SSL family, it provides an efficient option while still maintaining strong visual representation capabilities.
<img src="webssl_teaser.png" alt="WebSSL Model Overview" width="600">
## Usage
```python
from transformers import AutoImageProcessor, ViTModel
import torch
from PIL import Image
# Adjust the size, crop_size, etc. fields to your liking
processor = AutoImageProcessor.from_pretrained('facebook/webssl-mae300m-full2b-224')
model = ViTModel.from_pretrained('facebook/webssl-mae300m-full2b-224').cuda().eval()
# Process an image
image = Image.open('path/to/image.jpg')
inputs = processor(images=image, return_tensors="pt").to('cuda')
with torch.no_grad():
outputs = model(**inputs)
# Extract features from the encoder
encoder_hidden_states = outputs.last_hidden_state
```
## Citation
```bibtex
@article{fan2025scaling,
title={Scaling Language-Free Visual Representation Learning},
author={David Fan and Shengbang Tong and Jiachen Zhu and Koustuv Sinha and Zhuang Liu and Xinlei Chen and Michael Rabbat and Nicolas Ballas and Yann LeCun and Amir Bar and Saining Xie},
year={2025},
eprint={2504.01017},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
0xshaf/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-opaque_prickly_ape | 0xshaf | 2025-04-24T01:55:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am opaque prickly ape",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-24T01:53:35Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-opaque_prickly_ape
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am opaque prickly ape
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-opaque_prickly_ape
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="0xshaf/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-opaque_prickly_ape", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Nitral-Archive/vmc-12B-0.4-lora | Nitral-Archive | 2025-04-23T21:45:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Nitral-Archive/vmc-12B-0.3",
"base_model:adapter:Nitral-Archive/vmc-12B-0.3",
"region:us"
] | null | 2025-04-23T21:45:01Z | ---
base_model: Nitral-AI/vmc-12B-0.3
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
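In the meantime, the repository metadata identifies this as a PEFT (LoRA) adapter for `Nitral-AI/vmc-12B-0.3`, so a loading sketch might look like the following (an assumption based on the metadata, not an official example):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Nitral-AI/vmc-12B-0.3")
model = PeftModel.from_pretrained(base, "Nitral-Archive/vmc-12B-0.4-lora")
tokenizer = AutoTokenizer.from_pretrained("Nitral-AI/vmc-12B-0.3")
```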
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
Rey25/merged_lora_llama | Rey25 | 2025-04-23T20:41:04Z | 0 | 0 | null | [
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2025-04-23T20:37:14Z | ---
license: apache-2.0
---
|
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-Extragradient-0420125020-epoch-8 | vectorzhou | 2025-04-23T20:17:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"fine-tuned",
"trl",
"extra-gradient",
"conversational",
"dataset:PKU-Alignment/PKU-SafeRLHF",
"arxiv:2503.08942",
"base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-23T20:14:30Z | ---
base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT
datasets: PKU-Alignment/PKU-SafeRLHF
library_name: transformers
model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-Extragradient
tags:
- generated_from_trainer
- text-generation
- fine-tuned
- trl
- extra-gradient
licence: license
---
# Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-Extragradient
This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-Extragradient-0420125020-epoch-8", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zhourunlongvector/nlhf/runs/855dswk0)
This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942).
### Framework versions
- TRL: 0.13.0
- Transformers: 4.48.0
- Pytorch: 2.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite Extragradient as:
```bibtex
@misc{zhou2025extragradientpreferenceoptimizationegpo,
title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback},
author={Runlong Zhou and Maryam Fazel and Simon S. Du},
year={2025},
eprint={2503.08942},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.08942},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MRMATE/MATE | MRMATE | 2025-04-23T18:16:42Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-23T18:16:42Z | ---
license: apache-2.0
---
|
NbAiLab/whisper-small-nob | NbAiLab | 2025-04-23T18:12:33Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"norwegian",
"no",
"dataset:NbAiLab/NCC_S",
"dataset:NbAiLab/NPSC",
"dataset:NbAiLab/NST",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-15T10:55:19Z | ---
language:
- no
license: apache-2.0
tags:
- whisper-event
- norwegian
datasets:
- NbAiLab/NCC_S
- NbAiLab/NPSC
- NbAiLab/NST
metrics:
- wer
model-index:
- name: Whisper Small Norwegian Bokmål
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: FLEURS
type: google/fleurs
config: nb_no
split: validation
args: nb_no
metrics:
- name: Wer
type: wer
value: 15.56
---
# Whisper Small Norwegian Bokmål
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) trained on several datasets.
It is currently in the middle of a large training run. At this stage, it achieves the following results on the evaluation set:
- Loss: 0.3230
- Wer: 15.56
## Model description
The model is trained on a large corpus of roughly 5,000 hours of speech. The sources are subtitles from the Norwegian broadcaster NRK, transcribed speeches from the Norwegian parliament, and voice recordings from Norsk Språkteknologi.
## Intended uses & limitations
The model will be free for everyone to use when it is finished.
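Although the card does not yet include usage instructions, the checkpoint should load like any Whisper model. A sketch, assuming 16 kHz mono input and a recent `transformers` version that accepts `language`/`task` in `generate`:

```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("NbAiLab/whisper-small-nob")
model = WhisperForConditionalGeneration.from_pretrained("NbAiLab/whisper-small-nob")

# `audio` is a 1-D float array sampled at 16 kHz (loading it is omitted here)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
ids = model.generate(inputs.input_features, language="no", task="transcribe")
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```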
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 128
- gradient_accumulation_steps: 2
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant with warmup
- lr_scheduler_warmup_steps: 1000
- training_steps: 50,000 (currently at 1,000)
- mixed_precision_training: fp16
- deepspeed: true
### Live Training results
See [Tensorboard Metrics](https://huggingface.co/NbAiLab/whisper-small-nob/tensorboard)
|
Genereux-akotenou/yolos-headwear | Genereux-akotenou | 2025-04-23T17:05:13Z | 214 | 0 | transformers | [
"transformers",
"pytorch",
"yolos",
"object-detection",
"YOLOS",
"Object detection",
"KYC",
"Face detection",
"en",
"dataset:detection-datasets/fashionpedia",
"base_model:hustvl/yolos-small",
"base_model:finetune:hustvl/yolos-small",
"endpoints_compatible",
"region:us"
] | object-detection | 2025-04-21T15:23:22Z | ---
datasets:
- detection-datasets/fashionpedia
language:
- en
pipeline_tag: object-detection
tags:
- YOLOS
- Object detection
- KYC
- Face detection
base_model:
- hustvl/yolos-small
library_name: transformers
---
# YOLO-Face-4-KYC
A fine-tuned YOLO model for detecting facial features and personal headwear, including glasses, hats, and masks — optimized for identity verification and KYC (Know Your Customer) applications. This model builds on the YOLOS architecture and is trained on a filtered and relabeled subset of the [Fashionpedia dataset](https://huggingface.co/datasets/detection-datasets/fashionpedia), focusing exclusively on categories relevant to liveness verification and facial compliance checks. For more details on the implementation, see the source code [here](https://github.com/Genereux-akotenou/Yolo-Face-4-Kyc).
## Supported Categories
The model detects the following classes:
```python
CATEGORIES = [
'face',
'glasses',
'hat',
'mask',
'headband',
'head covering'
]
```
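For inference, the standard object-detection pipeline should work (a minimal sketch; the image path and the 0.5 confidence threshold are placeholders):

```python
from transformers import pipeline

detector = pipeline("object-detection", model="Genereux-akotenou/yolos-headwear")
for det in detector("path/to/selfie.jpg", threshold=0.5):
    print(det["label"], round(det["score"], 3), det["box"])
```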
<img src="https://raw.githubusercontent.com/genereux-akotenou/Yolo-Face-4-Kyc/master/docs-utils/img1.png" style="width: 15em;"/> |
RRashmini/google-unimax-t5-small-7 | RRashmini | 2025-04-23T13:32:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"umt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-04-23T13:31:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Sophie-Rain-Sophie-Rain-Spiderman-Video-9/Sophie.Rain.SpiderMan.Video.Official.Leaks | Sophie-Rain-Sophie-Rain-Spiderman-Video-9 | 2025-04-23T13:15:25Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-23T13:15:12Z |
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️</a></p>
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
03 seconds ago
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter Telegram
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
|
jaimetorre/bears | jaimetorre | 2025-04-23T13:09:54Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2025-04-23T13:09:51Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
secmlr/SWE-BENCH-5k-first-2000-claude-3in1-localization_qwen_code_7B_5k_first_2000 | secmlr | 2025-04-23T11:46:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-23T09:02:37Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: SWE-BENCH-5k-first-2000-claude-3in1-localization_qwen_code_7B_5k_first_2000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SWE-BENCH-5k-first-2000-claude-3in1-localization_qwen_code_7B_5k_first_2000
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on the SWE-BENCH-5k-first-2000-claude-3in1-localization dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 12
- total_train_batch_size: 48
- total_eval_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0
|
flote2025/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wiry_nasty_panda | flote2025 | 2025-04-23T09:42:43Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am wiry nasty panda",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-22T09:33:35Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wiry_nasty_panda
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am wiry nasty panda
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wiry_nasty_panda
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="flote2025/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wiry_nasty_panda", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
fbaldassarri/ibm-granite_granite-3.3-2b-instruct-autogptq-int4-gs64-asym | fbaldassarri | 2025-04-22T21:49:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"granite",
"text-generation",
"granite-3.3",
"autoround",
"auto-round",
"intel-autoround",
"intel",
"woq",
"gptq",
"autogptq",
"auto-gptq",
"pytorch",
"ibm",
"granite-3",
"conversational",
"en",
"es",
"fr",
"de",
"pt",
"ja",
"it",
"zh",
"ko",
"ar",
"cs",
"nl",
"base_model:ibm-granite/granite-3.3-2b-instruct",
"base_model:quantized:ibm-granite/granite-3.3-2b-instruct",
"license:apache-2.0",
"autotrain_compatible",
"4-bit",
"region:us"
] | text-generation | 2025-04-22T21:49:01Z | ---
language:
- en
- es
- fr
- de
- pt
- ja
- it
- zh
- ko
- ar
- cs
- nl
pipeline_tag: text-generation
license: apache-2.0
library_name: transformers
tags:
- granite-3.3
- autoround
- auto-round
- intel-autoround
- intel
- woq
- gptq
- autogptq
- auto-gptq
- pytorch
- ibm
- granite
- granite-3
model_name: Granite 3.3 2b instruct
base_model:
- ibm-granite/granite-3.3-2b-instruct
inference: false
model_creator: ibm-granite
prompt_template: '{prompt}'
quantized_by: fbaldassarri
---
## Model Information
Quantized version of [ibm-granite/granite-3.3-2b-instruct](https://huggingface.co/ibm-granite/granite-3.3-2b-instruct) using torch.float32 for quantization tuning.
- 4 bits (INT4)
- group size = 64
- Asymmetrical Quantization
- Method AutoGPTQ
Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.4.7
Note: this INT4 version of granite-3.3-2b-instruct has been quantized to run inference on CPU.
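A minimal CPU inference sketch, assuming `auto-gptq` and `optimum` are installed so that `transformers` can load the GPTQ weights:
```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fbaldassarri/ibm-granite_granite-3.3-2b-instruct-autogptq-int4-gs64-asym"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cpu")

inputs = tokenizer("Summarize what the Granite 3.3 models are.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```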
## Replication Recipe
### Step 1 Install Requirements
I suggest installing the requirements into a dedicated Python virtualenv or conda environment.
```
wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.7.tar.gz
tar -xvzf v0.4.7.tar.gz
cd auto-round-0.4.7
pip install -r requirements-cpu.txt --upgrade
```
### Step 2 Build Intel AutoRound wheel from sources
```
pip install -vvv --no-build-isolation -e .[cpu]
```
### Step 3 Script for Quantization
```
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

# Load the full-precision base model and its tokenizer
model_name = "ibm-granite/granite-3.3-2b-instruct"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# INT4, group size 64, asymmetric (sym=False) quantization, tuned on CPU
bits, group_size, sym, device = 4, 64, False, 'cpu'
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device)
autoround.quantize()

# Export the quantized weights in AutoGPTQ format
output_dir = "./AutoRound/ibm-granite_granite-3.3-2b-instruct-autogptq-int4-gs64-asym"
autoround.save_quantized(output_dir, format='auto_gptq', inplace=True)
```
## License
[Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/)
## Disclaimer
This quantized model comes with no warranty. It has been developed for research purposes only.
|
diffusionai/detr-face-detection | diffusionai | 2025-04-22T18:38:56Z | 189 | 1 | transformers | [
"transformers",
"pytorch",
"detr",
"object-detection",
"face",
"detection",
"face detection",
"en",
"license:creativeml-openrail-m",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-05-25T15:58:17Z | ---
license: creativeml-openrail-m
language:
- en
pipeline_tag: object-detection
tags:
- face
- detection
- face detection
--- |
Draq10/llama3.1-POMI-16bitGGUF-HF | Draq10 | 2025-04-22T16:12:25Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Llama-3.1-8B",
"base_model:quantized:unsloth/Llama-3.1-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-22T16:12:16Z | ---
base_model: unsloth/Llama-3.1-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Draq10
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.1-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
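A minimal sketch for running the GGUF weights with `llama-cpp-python`; the filename glob is an assumption, so check the repo's file list for the exact `.gguf` name:
```python
from llama_cpp import Llama

# Download and load the GGUF file directly from the Hub.
llm = Llama.from_pretrained(
    repo_id="Draq10/llama3.1-POMI-16bitGGUF-HF",
    filename="*.gguf",  # assumption: replace with the exact filename in the repo
)
out = llm("Explain fine-tuning in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```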
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
xw17/Llama-3.2-3B-Instruct_finetuned__optimized_lora_globem_aug | xw17 | 2025-04-21T21:33:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-21T21:33:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DevQuasar/huihui-ai.Qwen2.5-1.5B-Instruct-abliterated-SFT-GGUF | DevQuasar | 2025-04-21T12:09:55Z | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:huihui-ai/Qwen2.5-1.5B-Instruct-abliterated-SFT",
"base_model:quantized:huihui-ai/Qwen2.5-1.5B-Instruct-abliterated-SFT",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-21T11:59:58Z | ---
base_model:
- huihui-ai/Qwen2.5-1.5B-Instruct-abliterated-SFT
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [huihui-ai/Qwen2.5-1.5B-Instruct-abliterated-SFT](https://huggingface.co/huihui-ai/Qwen2.5-1.5B-Instruct-abliterated-SFT)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
ASethi04/llama-3.1-8b-legalbench-privacy_policy_qa-lora-gaussian | ASethi04 | 2025-04-21T09:33:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-04-21T08:49:00Z | ---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: llama-3.1-8b-legalbench-privacy_policy_qa-lora-gaussian
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for llama-3.1-8b-legalbench-privacy_policy_qa-lora-gaussian
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/llama-3.1-8b-legalbench-privacy_policy_qa-lora-gaussian", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/z8bpi4vg)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
nomadrp/mdpo-v7 | nomadrp | 2025-04-21T05:23:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-04-21T05:06:54Z | ---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
library_name: transformers
model_name: mdpo-v7
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for mdpo-v7
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nomadrp/mdpo-v7", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.15.0.dev0
- Transformers: 4.48.2
- Pytorch: 2.2.0+cu118
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
jnjj/one-layer-gpt2-colab-v2-20250420_072746 | jnjj | 2025-04-20T07:30:41Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-20T07:30:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DhulipallaGopiChandu/wav2vec2-base-960h | DhulipallaGopiChandu | 2025-04-19T10:06:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-04-19T10:05:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
annaren/Original-video-anna-renn-annarenn-annarennn-anna-ren-annarennnn-anna-renn-reddit-annarennsworld | annaren | 2025-04-18T03:42:37Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-18T03:41:56Z | Watch 🟢 ➤ ➤ ➤ 🌐 https://shiningwalls.com/fgyhjsjs
🔴 ➤►DOWNLOAD👉👉 https://shiningwalls.com/fgyhjsjs |
emre/gemma-3-27b-it-tr-reasoning40k-4bit | emre | 2025-04-17T00:01:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"tr",
"base_model:unsloth/gemma-3-27b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-27b-it-unsloth-bnb-4bit",
"license:afl-3.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-11T10:45:52Z | ---
base_model: unsloth/gemma-3-27b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: afl-3.0
language:
- en
- tr
---
# Uploaded model
- **Developed by:** emre
- **Finetuned from model :** unsloth/gemma-3-27b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
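A minimal usage sketch (assumes `bitsandbytes` is installed for the 4-bit weights; the Turkish prompt is only illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="emre/gemma-3-27b-it-tr-reasoning40k-4bit", device_map="auto")
# Illustrative prompt: "If a train has 12 cars and each car has 36 seats, how many seats are there in total?"
messages = [{"role": "user", "content": "Bir trende 12 vagon ve her vagonda 36 koltuk varsa toplam kaç koltuk vardır?"}]
print(generator(messages, max_new_tokens=256, return_full_text=False)[0]["generated_text"])
```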
## Preliminary Evaluation Results / Leaderboard (Unofficial)
Below are preliminary results for several models evaluated on the TARA v1 dataset. These results were computed with the listed evaluator model (`gemini-2-flash`) using the `success_rate (%)` metric. This table is not an official leaderboard, but it is intended to show the relative performance of the models across different reasoning domains.
* **Evaluator Model:** `gemini-2-flash`
* **Metric:** `success_rate (%)`
| Model | Scientific (RAG) (%) | Ethics (%) | Scenario (%) | Creative (%) | Logical (%) | Math (%) | Planning (%) | Python (%) | SQL (%) | Historical (RAG) (%) | Overall Success (%) |
| :------------------------------------------------------------------------------- | :----------------: | :------: | :---------: | :----------: | :-----------: | :-----------: | :----------: | :--------: | :-----: | :----------------: | :--------------: |
| [emre/gemma-3-4b-it-tr-reasoning40k](https://huggingface.co/emre/gemma-3-4b-it-tr-reasoning40k) | 73.64 | 62.73 | 60.91 | 48.18 | 60.00 | 38.18 | 51.82 | 35.45 | 41.82 | 75.45 | **54.82** |
| [unsloth/gemma-3-4b-it](https://huggingface.co/unsloth/gemma-3-4b-it) | 62.73 | 74.55 | 88.18 | 58.18 | 71.82 | 59.09 | 41.82 | 70.91 | 41.82 | 95.45 | **66.45** |
| [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it) | 63.64 | 46.36 | 47.27 | 40.00 | 54.55 | 27.27 | 17.27 | 33.64 | 30.00 | 53.64 | **41.36** |
| [emre/gemma-7b-it-Turkish-Reasoning-FT-smol](https://huggingface.co/emre/gemma-7b-it-Turkish-Reasoning-FT-smol) | 52.73 | 42.73 | 45.45 | 21.82 | 39.09 | 33.64 | 28.18 | 30.00 | 30.00 | 60.91 | **38.45** |
| [emre/gemma-3-12b-it-tr-reasoning40k](https://huggingface.co/emre/gemma-3-12b-it-tr-reasoning40k) | 92.73 | 70.91 | 86.36 | 62.73 | 71.82 | 83.64 | 60.00 | 92.73 | 55.45 | 79.09 | **75.55** |
| [unsloth/gemma-3-12b-it](https://huggingface.co/unsloth/gemma-3-12b-it) | 85.45 | 93.64 | 93.64 | 68.18 | 77.27 | 62.73 | 53.64 | 86.36 | 61.82 | 95.45 | **77.82** |
| [emre/gemma-3-12b-ft-tr-reasoning40k](https://huggingface.co/emre/gemma-3-12b-ft-tr-reasoning40k) | 86.36 | 68.18 | 77.27 | 54.55 | 47.27 | 50.91 | 43.64 | 59.09 | 23.64 | 85.55 | **59.55** |
| [emre/gemma-3-27b-it-tr-reasoning40k-4bit](https://huggingface.co/emre/gemma-3-27b-it-tr-reasoning40k-4bit) | 93.64 | 95.45 | 97.27 | 65.45 | 77.27 | 82.73 | 71.82 | 92.73 | 75.45 | 95.45 | **84.73** |
| [unsloth/gemma-3-27b-it-unsloth-bnb-4bit](https://huggingface.co/unsloth/gemma-3-27b-it-unsloth-bnb-4bit) | 86.36 | 71.82 | 96.36 | 59.09 | 81.82 | 76.36 | 66.36 | 93.64 | 69.09 | 99.09 | **80.00** |
| [TURKCELL/Turkcell-LLM-7b-v1](https://huggingface.co/TURKCELL/Turkcell-LLM-7b-v1)| 50.91 | 49.09 | 31.82 | 12.73 | 43.73 | 14.55 | 15.45 | 20.00 | 0.91 | 75.45 | **31.36** |
| [google/gemini-1.5-flash](https://ai.google.dev/gemini-api/docs/models?hl=en#model-versions) | 100.00 | 90.91 | 100.00 | 77.27 | 100.00 | 63.64 | 71.82 | 92.73 | 85.45 | 100.00 | **88.18** |
| [google/gemini-2.0-flash-lite](https://ai.google.dev/gemini-api/docs/models?hl=en#model-versions) | 95.45 | 100.00 | 100.00 | 79.09 | 100.00 | 85.45 | 80.91 | 92.73 | 90.91 | 97.27 | **92.18** |
| [Trendyol/Trendyol-LLM-7B-chat-v4.1.0](https://huggingface.co/Trendyol/Trendyol-LLM-7B-chat-v4.1.0) | 84.55 | 71.82 | 68.18 | 54.55 | 70.91 | 60.00 | 46.36 | 80.00 | 46.36 | 81.82 | **66.46** |
| [Openai/gpt-4o-mini-2024-07-18](https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/) | 93.64 | 87.27 | 100.00 | 75.45 | 82.73 | 75.45 | 71.82 | 92.73 | 76.36 | 100.00 | **85.55** |
| [Openai/o3-mini-2025-01-31](https://openai.com/index/openai-o3-mini/) | 100.00 | 93.64 | 100.00 | 92.73 | 100.00 | 100.00 | 85.45 | 88.18 | 100.00 | 100.00 | **96.00** |
| [neuralwork/gemma-2-9b-it-tr](https://huggingface.co/neuralwork/gemma-2-9b-it-tr) | 94.55 | 81.82 | 91.82 | 91.82 | 79.09 | 58.18 | 46.36 | 61.82 | 49.09 | 96.36 | **75.09** |
| [Openai/gpt-4.1-nano-2025-04-14](https://openai.com/index/gpt-4-1/) | 100.00 | 95.45 | 82.73 | 91.82 | 82.73 | 69.09 | 71.82 | 86.36 | 75.45 | 100.00 | **85.55** |
| [Openai/gpt-4o-2024-08-06](https://openai.com/index/gpt-4o-system-card/) | 89.09 | 80.91 | 90.91 | 91.82 | 91.82 | 92.73 | 71.82 | 92.73 | 70.00 | 100.00 | **87.18** |
| [Openai/gpt-4.1-mini-2025-04-14](https://openai.com/index/gpt-4-1/) | 100.00 | 100.00 | 100.00 | 92.73 | 91.82 | 100.00 | 84.55 | 100.00 | 100.00 | 100.00 | **96.91** | |
shehryaraijaz/m2m100-legal-translation-en-ur | shehryaraijaz | 2025-04-16T21:33:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/m2m100_418M",
"base_model:finetune:facebook/m2m100_418M",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-04-16T16:29:23Z | ---
library_name: transformers
license: mit
base_model: facebook/m2m100_418M
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: m2m100-legal-translation-en-ur
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100-legal-translation-en-ur
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2290
- Bleu: 30.9872
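A minimal English→Urdu usage sketch with the standard M2M-100 API (the input sentence is only an illustrative legal clause):
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_id = "shehryaraijaz/m2m100-legal-translation-en-ur"
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en"
inputs = tokenizer("The agreement shall terminate upon thirty days' written notice.", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("ur"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```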
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 8.278 | 0.2688 | 25 | 6.3522 | 3.5831 |
| 5.3323 | 0.5376 | 50 | 4.9369 | 9.8940 |
| 4.3888 | 0.8065 | 75 | 3.7497 | 12.2129 |
| 3.0058 | 1.0753 | 100 | 2.6821 | 15.0216 |
| 2.2164 | 1.3441 | 125 | 1.7722 | 16.6899 |
| 1.2595 | 1.6129 | 150 | 1.0978 | 18.5312 |
| 0.8531 | 1.8817 | 175 | 0.7009 | 19.6160 |
| 0.579 | 2.1505 | 200 | 0.4962 | 21.4995 |
| 0.4599 | 2.4194 | 225 | 0.4073 | 22.5162 |
| 0.3362 | 2.6882 | 250 | 0.3502 | 23.5743 |
| 0.353 | 2.9570 | 275 | 0.3122 | 25.6336 |
| 0.313 | 3.2258 | 300 | 0.2887 | 27.3754 |
| 0.3446 | 3.4946 | 325 | 0.2702 | 28.0800 |
| 0.3345 | 3.7634 | 350 | 0.2541 | 29.5058 |
| 0.2018 | 4.0323 | 375 | 0.2439 | 30.0331 |
| 0.2381 | 4.3011 | 400 | 0.2360 | 30.5859 |
| 0.1609 | 4.5699 | 425 | 0.2315 | 30.8617 |
| 0.2169 | 4.8387 | 450 | 0.2290 | 30.9872 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|