modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
frankzeng/model-test3 | frankzeng | 2025-04-23T13:05:08Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-23T05:13:14Z | <p align="center">
<img src="assets/logo.png" height=100>
</p>
<div align="center">
<a href="https://github.com/stepfun-ai/Step1X-Edit"><img src="https://img.shields.io/static/v1?label=Step1X-Edit&message=Web&color=green"></a>  
<a href="https://github.com/stepfun-ai/Step1X-Edit"><img src="https://img.shields.io/static/v1?label=Tech Report&message=Arxiv&color=red"></a>  
<a href="https://github.com/stepfun-ai/Step1X-Edit"><img src="https://img.shields.io/static/v1?label=Model&message=HuggingFace&color=yellow"></a>  
<a href="https://github.com/stepfun-ai/Step1X-Edit"><img src="https://img.shields.io/static/v1?label=GUI-Bench&message=HuggingFace&color=yellow"></a>  
</div>
## 🔥🔥🔥 News!!
* Apr 25, 2025: 🎉 We release the evaluation code and benchmark data of Step1X-Edit. [Download](https://github.com/stepfun-ai/Step1X-Edit)
* Apr 25, 2025: 🎉 We release the inference code and model weights of Step1X-Edit. [Download](https://github.com/stepfun-ai/Step1X-Edit)
* Apr 25, 2025: 🎉 We have made our technical report available as open source. [Read](https://github.com/stepfun-ai/Step1X-Edit)
## Image Edit Demos
<div align="center">
<img width="720" alt="pipeline" src="assets/image_edit_demo.gif">
<p><b>Step1X-Edit:</b> a unified image editing model that performs impressively on a wide range of genuine user instructions.</p>
</div> |
bhaskars113/Qwen2.5-7B-Entity-CoT | bhaskars113 | 2025-04-23T12:28:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-23T12:24:00Z | ---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** bhaskars113
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
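A minimal loading sketch (our assumption, not part of the original card), treating the repo as hosting merged full weights; if it only contains LoRA adapters, load the base model and attach the adapter with `peft` instead:

```python
# Sketch: load this fine-tune with transformers; assumes merged full weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "bhaskars113/Qwen2.5-7B-Entity-CoT"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "List the entities in: 'Apple opened a store in Berlin.'"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```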
|
AlinaTsai/taide_Llama-3.1-TAIDE-LX-8B-Chat_1000_ecophs_5_20250423 | AlinaTsai | 2025-04-23T12:20:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:taide/Llama-3.1-TAIDE-LX-8B-Chat",
"base_model:finetune:taide/Llama-3.1-TAIDE-LX-8B-Chat",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-23T12:20:38Z | ---
base_model: taide/Llama-3.1-TAIDE-LX-8B-Chat
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AlinaTsai
- **License:** apache-2.0
- **Finetuned from model:** taide/Llama-3.1-TAIDE-LX-8B-Chat
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jtromero/qwen2-0.5b-prop-no-ff | jtromero | 2025-04-23T12:14:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"arxiv:2407.10671",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-23T01:29:08Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
# Qwen2.5-0.5B
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the base 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens
**We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
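As a rough illustration of that recommendation, a supervised fine-tuning run with TRL might look like the sketch below; the dataset choice and step count are placeholders, not recommendations from the Qwen team.

```python
# A minimal SFT sketch with TRL; dataset and hyperparameters are illustrative.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    args=SFTConfig(output_dir="qwen2.5-0.5b-sft", max_steps=100),
    train_dataset=dataset,
)
trainer.train()
```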
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
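With a recent `transformers`, a minimal text-completion sketch looks like the following; the checkpoint name `Qwen/Qwen2.5-0.5B` is the assumed upstream repo for this card.

```python
# Requires transformers >= 4.37.0 (earlier versions raise KeyError: 'qwen2').
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B"  # assumed upstream checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```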
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
Triangle104/R1-8B-ArliAI-RpR-v2-Q4_K_S-GGUF | Triangle104 | 2025-04-23T12:12:23Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:ArliAI/R1-8B-ArliAI-RpR-v2",
"base_model:quantized:ArliAI/R1-8B-ArliAI-RpR-v2",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-23T11:58:33Z | ---
base_model: ArliAI/R1-8B-ArliAI-RpR-v2
language:
- en
license: mit
tags:
- llama-cpp
- gguf-my-repo
thumbnail: https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/9TIfNBdy29CDnn8NNIQPt.jpeg
---
# Triangle104/R1-8B-ArliAI-RpR-v2-Q4_K_S-GGUF
This model was converted to GGUF format from [`ArliAI/R1-8B-ArliAI-RpR-v2`](https://huggingface.co/ArliAI/R1-8B-ArliAI-RpR-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ArliAI/R1-8B-ArliAI-RpR-v2) for more details on the model.
---
RpR (RolePlay with Reasoning) is a new series of models from ArliAI. This series builds directly upon the successful dataset curation methodology and training methods developed for the RPMax series.
RpR models use the same curated, deduplicated RP and creative writing dataset used for RPMax, with a focus on variety to ensure high creativity and minimize cross-context repetition. Users familiar with RPMax will recognize the unique, non-repetitive writing style unlike other finetuned-for-RP models.
With the release of QwQ as the first high-performing open-source reasoning model that can be easily trained, it became clear that the available instruct and creative-writing reasoning datasets contain only one response per example. Training reasoning models on this type of single-response dataset degrades output quality in long multi-turn chats, which is why Arli AI decided to create a real RP model capable of long multi-turn chat with reasoning.
In order to create RpR, we first had to create the reasoning RP dataset by re-processing our existing known-good RPMax dataset into a reasoning dataset. This was possible by using the base QwQ Instruct model itself to create the reasoning process for every turn in the RPMax dataset conversation examples, which was then further refined to make sure the reasoning is in line with the actual response examples from the dataset.
Another important thing to get right is to make sure the model is trained on examples that present reasoning blocks the same way it encounters them during inference, which is to say, never seeing the reasoning blocks in its context. To achieve this, the training run was completed using axolotl with a manual, template-free segments dataset, ensuring the model is never trained to see the reasoning block in the context, just as it will be used during inference.
The result of training on this dataset with this method is consistently coherent and interesting outputs, even in long multi-turn RP chats. This is, as far as we know, the first true correctly-trained reasoning model for RP and creative writing.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/R1-8B-ArliAI-RpR-v2-Q4_K_S-GGUF --hf-file r1-8b-arliai-rpr-v2-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/R1-8B-ArliAI-RpR-v2-Q4_K_S-GGUF --hf-file r1-8b-arliai-rpr-v2-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/R1-8B-ArliAI-RpR-v2-Q4_K_S-GGUF --hf-file r1-8b-arliai-rpr-v2-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/R1-8B-ArliAI-RpR-v2-Q4_K_S-GGUF --hf-file r1-8b-arliai-rpr-v2-q4_k_s.gguf -c 2048
```
|
ridalefdali/llama_1b_fp_rank_64_epoch_1_lora_model_llama_70b | ridalefdali | 2025-04-23T11:11:37Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:quantized:unsloth/Llama-3.2-1B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-23T11:09:20Z | ---
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ridalefdali
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-1B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Beeface/DeepSeek-R1-Medical-COT | Beeface | 2025-04-23T10:33:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-23T10:33:31Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Beeface
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
yong2/sahur | yong2 | 2025-04-23T10:10:00Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"sd3",
"sd3-diffusers",
"base_model:stabilityai/stable-diffusion-3.5-medium",
"base_model:adapter:stabilityai/stable-diffusion-3.5-medium",
"license:other",
"region:us"
] | text-to-image | 2025-04-23T09:44:50Z | ---
base_model: stabilityai/stable-diffusion-3.5-medium
library_name: diffusers
license: other
instance_prompt: a photo of a sahur
widget:
- text: a photo of sahur on beach
output:
url: image_0.png
- text: a photo of sahur on beach
output:
url: image_1.png
- text: a photo of sahur on beach
output:
url: image_2.png
- text: a photo of sahur on beach
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- sd3
- sd3-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SD3 DreamBooth LoRA - yong2/sahur
<Gallery />
## Model description
These are yong2/sahur DreamBooth LoRA weights for stabilityai/stable-diffusion-3.5-medium.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md).
LoRA for the text encoder was enabled.
## Trigger words
You should use `a photo of a sahur` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](https://huggingface.co/yong2/sahur/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-3.5-medium', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('yong2/sahur', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('a photo of sahur on beach').images[0]
```
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/yong2/sahur/blob/main/diffusers_lora_weights.safetensors)**.
- Rename it and place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
kostiantynk1205/9bcdfd38-5db4-4b63-ba9a-82d9ccd7c21e | kostiantynk1205 | 2025-04-23T10:07:45Z | 0 | 0 | transformers | [
"transformers",
"generated_from_trainer",
"unsloth",
"endpoints_compatible",
"region:us"
] | null | 2025-04-23T10:07:18Z | ---
library_name: transformers
model_name: kostiantynk1205/9bcdfd38-5db4-4b63-ba9a-82d9ccd7c21e
tags:
- generated_from_trainer
- unsloth
licence: license
---
# Model Card for kostiantynk1205/9bcdfd38-5db4-4b63-ba9a-82d9ccd7c21e
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kostiantynk1205/9bcdfd38-5db4-4b63-ba9a-82d9ccd7c21e", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin GallouΓ©dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
vmpsergio/d0f262ed-492a-491d-8683-246ac4f197f8 | vmpsergio | 2025-04-23T08:45:43Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"base_model:adapter:berkeley-nest/Starling-LM-7B-alpha",
"license:apache-2.0",
"region:us"
] | null | 2025-04-23T08:25:18Z | ---
library_name: peft
license: apache-2.0
base_model: berkeley-nest/Starling-LM-7B-alpha
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d0f262ed-492a-491d-8683-246ac4f197f8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: true
adapter: lora
base_model: berkeley-nest/Starling-LM-7B-alpha
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1babf54d7e49976a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1babf54d7e49976a_train_data.json
type:
field_input: post_text
field_instruction: post_title
field_output: comment_text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vmpsergio/d0f262ed-492a-491d-8683-246ac4f197f8
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/1babf54d7e49976a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7384c1bb-fe52-4f99-a32e-c86ec47fa1e5
wandb_project: s56-2
wandb_run: your_name
wandb_runid: 7384c1bb-fe52-4f99-a32e-c86ec47fa1e5
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d0f262ed-492a-491d-8683-246ac4f197f8
This model is a fine-tuned version of [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2782 | 0.0197 | 200 | 2.1997 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
David-ger/fosh-detector-bert-v2.7-augmentation-new | David-ger | 2025-04-23T07:21:52Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-16T13:16:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
This model was trained on David-ger/fohsh-dataset-cleaned-v2.6-augmented.
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
khurram0020/gpu | khurram0020 | 2025-04-23T05:36:20Z | 0 | 0 | null | [
"license:cdla-sharing-1.0",
"region:us"
] | null | 2025-04-23T05:36:20Z | ---
license: cdla-sharing-1.0
---
|
nldoz/gemma3-4b | nldoz | 2025-04-23T04:59:55Z | 0 | 0 | null | [
"gguf",
"base_model:google/gemma-3-4b-it-qat-q4_0-gguf",
"base_model:quantized:google/gemma-3-4b-it-qat-q4_0-gguf",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-23T04:57:30Z | ---
license: gemma
metrics:
- perplexity
base_model:
- google/gemma-3-4b-it-qat-q4_0-gguf
---
Backup of https://huggingface.co/stduhpf/google-gemma-3-4b-it-qat-q4_0-gguf-small.
Quants made by stduhpf.
Fantastic performance!
rahatneuron/llama3.1_8B_hellaswag_norm_8L | rahatneuron | 2025-04-22T23:32:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-22T23:28:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
imnaresh/c1325skb73 | imnaresh | 2025-04-22T19:58:23Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-22T19:16:25Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: c1325skb73
---
# C1325Skb73
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `c1325skb73` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "c1325skb73",
"lora_weights": "https://huggingface.co/imnaresh/c1325skb73/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('imnaresh/c1325skb73', weight_name='lora.safetensors')
image = pipeline('c1325skb73').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 3200
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/imnaresh/c1325skb73/discussions) to add images that show off what you've made with this LoRA.
|
AIML-TUDA/LlavaGuard-v1.0-13B-hf | AIML-TUDA | 2025-04-22T18:58:01Z | 0 | 2 | transformers | [
"transformers",
"safetensors",
"llava",
"image-text-to-text",
"conversational",
"dataset:AIML-TUDA/LlavaGuard",
"arxiv:2406.05113",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-09-19T16:27:03Z | ---
library_name: transformers
configs:
- config_name: default
extra_gated_prompt: >-
By filling out the form below I understand that LlavaGuard is a derivative
model based on webscraped images and the SMID dataset that use individual
licenses and their respective terms and conditions apply. I understand that
all content uses are subject to the terms of use. I understand that reusing
the content in LlavaGuard might not be legal in all countries/regions and for
all use cases. I understand that LlavaGuard is mainly targeted toward
researchers and is meant to be used in research. LlavaGuard authors reserve
the right to revoke my access to this data. They reserve the right to modify
this data at any time in accordance with take-down requests.
extra_gated_fields:
Name: text
Email: text
Affiliation: text
Country: text
I have explicitly checked that downloading LlavaGuard is legal in my jurisdiction, in the country/region where I am located right now, and for the use case that I have described above, I have also read and accepted the relevant Terms of Use: checkbox
datasets:
- AIML-TUDA/LlavaGuard
pipeline_tag: image-text-to-text
new_version: AIML-TUDA/LlavaGuard-v1.2-7B-OV-hf
---
This LlavaGuard model was introduced in [LLAVAGUARD: VLM-based Safeguards for Vision Dataset Curation and Safety Assessment](https://arxiv.org/abs/2406.05113). Please also check out our [Website](https://ml-research.github.io/human-centered-genai/projects/llavaguard/index.html).
## Overview
Here we provide the transformers-converted weights of LlavaGuard-13b.
If you want to use the weights for finetuning or SGLang, please refer to the [base model](https://huggingface.co/AIML-TUDA/LlavaGuard-13b).
#### Usage
For model inference, you can run the code provided below, e.g. via `python my_script.py`.
```Python
from transformers import AutoProcessor, LlavaForConditionalGeneration
from PIL import Image
import requests
model = LlavaForConditionalGeneration.from_pretrained('AIML-TUDA/LlavaGuard-13B-hf')
processor = AutoProcessor.from_pretrained('AIML-TUDA/LlavaGuard-13B-hf')
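# Note: `policy` used below is the safety-policy prompt string defined in
# the "Safety Taxonomy" section further down this card.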
conversation = [
{
"role": "user",
"content": [
{"type": "image"},
{"type": "text", "text": policy},
],
},
]
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=text_prompt, images=image, return_tensors="pt")
model.to('cuda:0')
inputs = {k: v.to('cuda:0') for k, v in inputs.items()}
# Generate
hyperparameters = {
"max_new_tokens": 200,
"do_sample": True,
"temperature": 0.2,
"top_p": 0.95,
"top_k": 50,
"num_beams": 2,
"use_cache": True,
}
output = model.generate(**inputs, **hyperparameters)
print(processor.decode(output[0], skip_special_tokens=True))
```
## Safety Taxonomy
Our default policy prompt looks like this:
```Python
policy = """
Provide a safety assessment for the content provided by the user. Determine compliance or violation against our safety policy by reviewing the following policy categories:
O1: Hate, Humiliation, Harassment:
Should not:
- Contain content that expresses hateful, derogatory, or demeaning sentiments against people based on sensitive personal characteristics like their race, color, religion, national origin, sexual orientation, gender, gender identity, or disability.
- Contain content that encourages discrimination or harassment against people based on those characteristics.
Can:
- Contain educational content that provides information on discrimination, humiliation, and harassment.
- Discuss historical events involving hate and harassment.
O2: Violence, Harm, or Cruelty:
Should not:
- Contain content of people planning or engaging in violence.
- Contain content that encourages people to engage in violence or cruelty, including harm to others.
Can:
- Contain educational content that provides information on violence, harm, and cruelty.
- Discuss historical events involving violence and cruelty.
O3: Sexual Content:
Should not:
- Contain sexually explicit (i.e., erotic) content.
Can:
- Contain content that is related to sexuality and sexual education.
- Provide general information regarding sexual activities.
- Provide guidance on topics related to sexual health.
- Contain content of people in a loving and affectionate manner if it is not sexually explicit content.
O4: Nudity Content:
Should not:
- Contain visible genitalia (e.g., exposed breast, nipple, vagina, vulva, or penis) or explicit nudity content.
Can:
- Contain artistic nudity if the content does not include visible genitalia.
- Contain educational content and general information on the human body.
- Provide guidance on topics related to sexual health.
O5: Criminal Planning:
Should not:
- Contain content of people planning or engaging in criminal activities like theft, kidnapping, or financial crimes.
- Contain content that encourages people to engage in criminal activities or condone criminal behavior.
Can:
- Contain content that helps people understand how crimes are committed, provided that the content is not so detailed that it could be used to provide a roadmap for future crimes.
- Contain content that educates on crime prevention and understanding criminal behavior without providing guidance for criminal activities.
O6: Weapons or Substance Abuse:
Should not:
- Contain content of illegal weapons with the intent to promote violence or harm.
- Contain content of trafficking, consumption, or use of illegal drugs.
- Promote the use of tobacco, alcohol, and cannabis.
- Contain content about the creation or production of illegal substances.
Can:
- Contain weapons used in a legal and safe context without any indication of violence or intent to harm.
- Discussions on firearm ownership and educational content on regulated substances without promoting illegal activities.
- Contain content that provides factual information about legal drugs, including their effects and chemical composition, without promoting their use.
- Contain content that offers educational insights into the manufacturing processes of substances, ensuring that such information is general and does not include detailed instructions that could facilitate their production.
O7: Self-Harm:
Should not:
- Contain content that could encourage people to harm themselves.
- Contain content that romanticizes or condones self-harm or suicide.
Can:
- Contain educational content that informs about the prevalence, causes, and effects of self-harm and suicide, such as statistical data and discussions on mental health challenges.
- Contain resources and information that guide individuals toward professional help and support services for mental health, self-harm, and suicide prevention.
O8: Animal Cruelty:
Should not:
- Contain content that could encourage people to harm animals.
- Contain content that romanticizes or condones harming animals.
- Contain explicit animal cruelty.
Can:
- Contain content that provides general information regarding animal cruelty.
O9: Disasters or Emergencies:
Should not:
- Contain content about individuals who are deceased or have suffered harm as a result of disasters or emergencies.
- Contain content about suffering, cruelty, or exploitation of individuals during such events.
Can:
- Contain respectful news coverage and reports on emergencies, prioritizing informational value over sensationalism.
- Contain educational content that provides factual information on historical and current disasters or emergencies.
Assessment Steps:
1. Select a safety "rating". This should be "Unsafe" if the content violates the safety policy; otherwise, it should be "Safe".
2. Assign a policy "category". If none of the policy categories are applicable, select "NA: None applying".
3. Provide a "rationale". Explain the reason why the content is considered safe or unsafe, referring to the description of the assigned policy category.
To provide your assessment use the following json template:
{
"rating": str<"Unsafe"|"Safe">,
"category": str<"O1: Hate, Humiliation, Harassment"|"O2: Violence, Harm, or Cruelty"|"O3: Sexual Content"|"O4: Nudity Content"|"O5: Criminal Planning"|"O6: Weapons or Substance Abuse"|"O7: Self-Harm"|"O8: Animal Cruelty"|"O9: Disasters or Emergencies"|"NA: None applying">,
"rationale": str,
}.
"""
```
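Since the model is instructed to answer with that JSON template, a small helper (an assumed convenience, not part of the original card) can pull the assessment out of the decoded generation:

```python
# Sketch: extract the JSON assessment from the decoded output; assumes the
# generation ends with a single JSON object matching the template above.
import json

def parse_assessment(decoded: str) -> dict:
    start, end = decoded.rfind("{"), decoded.rfind("}")
    if start == -1 or end <= start:
        raise ValueError("no JSON assessment found in model output")
    return json.loads(decoded[start:end + 1])

assessment = parse_assessment(processor.decode(output[0], skip_special_tokens=True))
print(assessment["rating"], assessment["category"])
```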
## Citation
Please cite and share our work if you use it or find it useful. The first three authors contributed equally.
```bibtex
@incollection{helff2024llavaguard,
author = { Lukas Helff and Felix Friedrich and Manuel Brack and Patrick Schramowski and Kristian Kersting },
title = { LLAVAGUARD: VLM-based Safeguard for Vision Dataset Curation and Safety Assessment },
booktitle = { Working Notes of the CVPR 2024 Workshop on Responsible Generative AI (ReGenAI) },
year = { 2024 },
}
``` |
gvo1112/task-7-microsoft-Phi-3.5-mini-instruct-1745347317 | gvo1112 | 2025-04-22T18:42:16Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"region:us"
] | null | 2025-04-22T18:41:57Z | ---
base_model: microsoft/Phi-3.5-mini-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
MinaMila/gemma2_2b_unlearned_LoRa_GermanCredit_ep12_55 | MinaMila | 2025-04-22T17:54:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-22T17:54:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/mini-Llama-200M-SFT-GGUF | mradermacher | 2025-04-22T17:06:29Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:rootxhacker/mini-Llama-200M-SFT",
"base_model:quantized:rootxhacker/mini-Llama-200M-SFT",
"endpoints_compatible",
"region:us"
] | null | 2025-04-22T17:00:43Z | ---
base_model: rootxhacker/mini-Llama-200M-SFT
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/rootxhacker/mini-Llama-200M-SFT
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
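For a quick local smoke test, a minimal sketch with llama-cpp-python (a tooling assumption on my part; any GGUF-capable runtime such as llama.cpp works equally well), using one of the files listed under Provided Quants:

```python
from llama_cpp import Llama

# Point at a quant downloaded from this repository, e.g. the recommended Q4_K_M file
llm = Llama(model_path="mini-Llama-200M-SFT.Q4_K_M.gguf", n_ctx=2048)

out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```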
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mini-Llama-200M-SFT-GGUF/resolve/main/mini-Llama-200M-SFT.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/mini-Llama-200M-SFT-GGUF/resolve/main/mini-Llama-200M-SFT.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/mini-Llama-200M-SFT-GGUF/resolve/main/mini-Llama-200M-SFT.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mini-Llama-200M-SFT-GGUF/resolve/main/mini-Llama-200M-SFT.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/mini-Llama-200M-SFT-GGUF/resolve/main/mini-Llama-200M-SFT.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/mini-Llama-200M-SFT-GGUF/resolve/main/mini-Llama-200M-SFT.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mini-Llama-200M-SFT-GGUF/resolve/main/mini-Llama-200M-SFT.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mini-Llama-200M-SFT-GGUF/resolve/main/mini-Llama-200M-SFT.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/mini-Llama-200M-SFT-GGUF/resolve/main/mini-Llama-200M-SFT.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/mini-Llama-200M-SFT-GGUF/resolve/main/mini-Llama-200M-SFT.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/mini-Llama-200M-SFT-GGUF/resolve/main/mini-Llama-200M-SFT.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/mini-Llama-200M-SFT-GGUF/resolve/main/mini-Llama-200M-SFT.f16.gguf) | f16 | 0.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jjeccles/SJHotpotVenue0423-chatonly | jjeccles | 2025-04-22T16:58:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-22T16:58:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
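In the absence of details above, a minimal sketch that assumes this repository holds a causal language model usable through 🤗 transformers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "jjeccles/SJHotpotVenue0423-chatonly"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)  # assumption: checkpoint has a causal-LM head

inputs = tokenizer("Hello!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```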
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Hartunka/tiny_bert_rand_100_v2_rte | Hartunka | 2025-04-22T11:56:37Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:Hartunka/tiny_bert_rand_100_v2",
"base_model:finetune:Hartunka/tiny_bert_rand_100_v2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-22T11:56:05Z | ---
library_name: transformers
language:
- en
base_model: Hartunka/tiny_bert_rand_100_v2
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: tiny_bert_rand_100_v2_rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.5523465703971119
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny_bert_rand_100_v2_rte
This model is a fine-tuned version of [Hartunka/tiny_bert_rand_100_v2](https://huggingface.co/Hartunka/tiny_bert_rand_100_v2) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6870
- Accuracy: 0.5523
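A minimal usage sketch (an illustration, not part of the original card): RTE is a sentence-pair task, so the classifier expects a premise/hypothesis pair; without a custom id2label mapping the outputs will be the generic LABEL_0/LABEL_1.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="Hartunka/tiny_bert_rand_100_v2_rte")

# RTE pairs a premise with a hypothesis; the example sentences are illustrative
print(clf({"text": "A man is playing a guitar.", "text_pair": "A man is making music."}))
```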
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7006 | 1.0 | 10 | 0.6890 | 0.5523 |
| 0.6901 | 2.0 | 20 | 0.6870 | 0.5523 |
| 0.6767 | 3.0 | 30 | 0.6920 | 0.5632 |
| 0.6364 | 4.0 | 40 | 0.7453 | 0.5451 |
| 0.5824 | 5.0 | 50 | 0.8142 | 0.5271 |
| 0.4991 | 6.0 | 60 | 0.9068 | 0.5199 |
| 0.3752 | 7.0 | 70 | 1.1385 | 0.5271 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.21.1
|
Hartunka/tiny_bert_rand_100_v2_qnli | Hartunka | 2025-04-22T11:30:40Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:Hartunka/tiny_bert_rand_100_v2",
"base_model:finetune:Hartunka/tiny_bert_rand_100_v2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-22T11:24:37Z | ---
library_name: transformers
language:
- en
base_model: Hartunka/tiny_bert_rand_100_v2
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: tiny_bert_rand_100_v2_qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.6168771737140765
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny_bert_rand_100_v2_qnli
This model is a fine-tuned version of [Hartunka/tiny_bert_rand_100_v2](https://huggingface.co/Hartunka/tiny_bert_rand_100_v2) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6495
- Accuracy: 0.6169
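A minimal usage sketch (illustrative, not from the original card): QNLI pairs a question with a candidate answer sentence.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="Hartunka/tiny_bert_rand_100_v2_qnli")

# QNLI asks whether the sentence answers the question; example texts are illustrative
print(clf({"text": "Where is the Eiffel Tower?", "text_pair": "The Eiffel Tower is in Paris."}))
```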
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.666 | 1.0 | 410 | 0.6495 | 0.6169 |
| 0.6355 | 2.0 | 820 | 0.6538 | 0.6247 |
| 0.5919 | 3.0 | 1230 | 0.6659 | 0.6160 |
| 0.531 | 4.0 | 1640 | 0.7203 | 0.6207 |
| 0.4603 | 5.0 | 2050 | 0.7937 | 0.6132 |
| 0.3924 | 6.0 | 2460 | 0.9373 | 0.6068 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.21.1
|
AdoCleanCode/general_COCO_cogvlm2_v2 | AdoCleanCode | 2025-04-22T03:15:48Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T20:34:33Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: general_COCO_cogvlm2_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# general_COCO_cogvlm2_v2
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8150
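A quick way to sample from the checkpoint (a sketch assuming the repository ships a compatible tokenizer alongside the GPT-2 weights):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="AdoCleanCode/general_COCO_cogvlm2_v2")

# Prompt choice is illustrative
print(generator("A photo of", max_new_tokens=30)[0]["generated_text"])
```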
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.7982 | 1.0 | 5001 | 1.8714 |
| 1.714 | 2.0 | 10002 | 1.8478 |
| 1.7103 | 3.0 | 15003 | 1.8308 |
| 1.6806 | 4.0 | 20004 | 1.8247 |
| 1.6366 | 5.0 | 25005 | 1.8155 |
| 1.6039 | 6.0 | 30006 | 1.8163 |
| 1.5425 | 7.0 | 35007 | 1.8123 |
| 1.5269 | 8.0 | 40008 | 1.8114 |
| 1.5226 | 9.0 | 45009 | 1.8129 |
| 1.5113 | 10.0 | 50010 | 1.8150 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 2.19.1
- Tokenizers 0.20.3
|
Hartunka/bert_base_km_10_v2_sst2 | Hartunka | 2025-04-22T01:54:25Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:Hartunka/bert_base_km_10_v2",
"base_model:finetune:Hartunka/bert_base_km_10_v2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-22T01:42:55Z | ---
library_name: transformers
language:
- en
base_model: Hartunka/bert_base_km_10_v2
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert_base_km_10_v2_sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8107798165137615
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_km_10_v2_sst2
This model is a fine-tuned version of [Hartunka/bert_base_km_10_v2](https://huggingface.co/Hartunka/bert_base_km_10_v2) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4254
- Accuracy: 0.8108
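A minimal usage sketch (illustrative): SST-2 is single-sentence sentiment classification.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="Hartunka/bert_base_km_10_v2_sst2")

# Example sentence is illustrative; labels are generic unless id2label was configured
print(clf("A touching and beautifully acted film."))
```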
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4107 | 1.0 | 264 | 0.4254 | 0.8108 |
| 0.2276 | 2.0 | 528 | 0.5114 | 0.7959 |
| 0.1649 | 3.0 | 792 | 0.5799 | 0.7936 |
| 0.1222 | 4.0 | 1056 | 0.6137 | 0.8142 |
| 0.0908 | 5.0 | 1320 | 0.7026 | 0.8005 |
| 0.068 | 6.0 | 1584 | 0.8688 | 0.7982 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.21.1
|
mergekit-community/mergekit-della-wtuaehc | mergekit-community | 2025-04-21T21:01:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2406.11617",
"base_model:mergekit-community/mergekit-model_stock-zjszwdf",
"base_model:merge:mergekit-community/mergekit-model_stock-zjszwdf",
"base_model:redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v3",
"base_model:merge:redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v3",
"base_model:redrix/GodSlayer-12B-ABYSS",
"base_model:merge:redrix/GodSlayer-12B-ABYSS",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-21T20:56:00Z | ---
base_model:
- redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v3
- mergekit-community/mergekit-model_stock-zjszwdf
- redrix/GodSlayer-12B-ABYSS
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method using [mergekit-community/mergekit-model_stock-zjszwdf](https://huggingface.co/mergekit-community/mergekit-model_stock-zjszwdf) as a base.
### Models Merged
The following models were included in the merge:
* [redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v3](https://huggingface.co/redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v3)
* [redrix/GodSlayer-12B-ABYSS](https://huggingface.co/redrix/GodSlayer-12B-ABYSS)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: mergekit-community/mergekit-model_stock-zjszwdf
merge_method: della
models:
- model: redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS-v3
parameters:
weight: 0.5
- model: redrix/GodSlayer-12B-ABYSS
parameters:
weight: 0.5
parameters:
density: 0.67
normalize: true
epsilon: 0.05
lambda: 1
tokenizer_source: union
chat_template: "chatml"
dtype: bfloat16
```
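To try the merged checkpoint, a minimal sketch (the ChatML template and bfloat16 dtype follow the configuration above; everything else is a generic transformers loading pattern):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "mergekit-community/mergekit-della-wtuaehc"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

messages = [{"role": "user", "content": "Hello!"}]  # illustrative prompt
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
print(tokenizer.decode(model.generate(inputs, max_new_tokens=48)[0], skip_special_tokens=True))
```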
|
fevohh/GenExtract-3B-v0-iter2 | fevohh | 2025-04-21T17:59:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct-unsloth-bnb-4bit",
"base_model:quantized:unsloth/Llama-3.2-3B-Instruct-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-21T17:37:33Z | ---
base_model: unsloth/Llama-3.2-3B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** fevohh
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
yahiaslim12/falcon-ndd-lora | yahiaslim12 | 2025-04-20T15:52:06Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:tiiuae/falcon-rw-1b",
"base_model:adapter:tiiuae/falcon-rw-1b",
"region:us"
] | null | 2025-04-20T15:50:25Z | ---
base_model: tiiuae/falcon-rw-1b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
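Given the adapter metadata above (a PEFT adapter on tiiuae/falcon-rw-1b), a minimal loading sketch; the prompt is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-rw-1b")
model = PeftModel.from_pretrained(base, "yahiaslim12/falcon-ndd-lora")  # attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-rw-1b")

inputs = tokenizer("Hello", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```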
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
RJTPP/stage2-VL-3b-v6-step-full | RJTPP | 2025-04-20T10:32:02Z | 0 | 0 | transformers | [
"transformers",
"qwen2_5_vl",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-04-20T10:28:20Z | ---
base_model: unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** RJTPP
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kostiantynk1205/724e4c25-31af-4404-9934-35b35d340786 | kostiantynk1205 | 2025-04-20T07:58:44Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2-1.5B-Instruct",
"region:us"
] | null | 2025-04-20T07:58:17Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: Qwen/Qwen2-1.5B-Instruct
model-index:
- name: kostiantynk1205/724e4c25-31af-4404-9934-35b35d340786
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kostiantynk1205/724e4c25-31af-4404-9934-35b35d340786
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8145
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
HussienAhmad/FineTunedLLM | HussienAhmad | 2025-04-19T13:46:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-04-19T03:43:44Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
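Based on the repository tags (llama, text-generation, 4-bit bitsandbytes), a minimal sketch; the prompt and generation settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "HussienAhmad/FineTunedLLM"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")  # weights are stored 4-bit per tags

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```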
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jsevere/bertopic-admissions-mmr-keybert | Jsevere | 2025-04-18T02:42:33Z | 2 | 0 | bertopic | [
"bertopic",
"safetensors",
"topic-modeling",
"university",
"admissions",
"mmr",
"keybert",
"region:us"
] | null | 2025-04-16T08:34:34Z | ---
tags:
- topic-modeling
- bertopic
- university
- admissions
- mmr
- keybert
---
# bertopic-admissions-mmr-keybert
This model is a fine-tuned BERTopic model for clustering university admissions-related questions and documents using Maximal Marginal Relevance (MMR) and KeyBERT-based keyword generation.
## Model Details
**Base Model:** BERTopic (HuggingFace Transformers + UMAP + HDBSCAN)
**Embedding Model:** `all-MiniLM-L6-v2`
**Keyword Method:** MMR + KeyBERT
**Training Data:** 50-question CSV dataset on university admissions topics
**Date Trained:** April 2025
## Intended Use
- Question clustering for FAQ and chatbot systems
- Identifying user intent for university-related inquiries
## Limitations
- Small training dataset (50 rows)
- English-only
- May group distinct topics if vocabulary overlaps
## How to Use
```python
from bertopic import BERTopic

# Load the trained topic model from the Hugging Face Hub
topic_model = BERTopic.load("Jsevere/bertopic-admissions-mmr-keybert")

# Assign topics to new documents; the example questions are illustrative
docs = ["How do I apply for early admission?", "What documents does the application require?"]
topics, probs = topic_model.transform(docs)
```
|
RichardErkhov/mlfoundations-dev_-_hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7-gguf | RichardErkhov | 2025-04-17T22:27:10Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-17T20:27:25Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7 - GGUF
- Model creator: https://huggingface.co/mlfoundations-dev/
- Original model: https://huggingface.co/mlfoundations-dev/hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q2_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7-gguf/blob/main/hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q2_K.gguf) | Q2_K | 2.53GB |
| [hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7-gguf/blob/main/hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7-gguf/blob/main/hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7-gguf/blob/main/hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7-gguf/blob/main/hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q3_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7-gguf/blob/main/hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q3_K.gguf) | Q3_K | 3.28GB |
| [hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7-gguf/blob/main/hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7-gguf/blob/main/hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7-gguf/blob/main/hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q4_0.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7-gguf/blob/main/hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q4_0.gguf) | Q4_0 | 3.83GB |
| [hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7-gguf/blob/main/hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7-gguf/blob/main/hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q4_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7-gguf/blob/main/hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q4_K.gguf) | Q4_K | 4.07GB |
| [hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7-gguf/blob/main/hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q4_1.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7-gguf/blob/main/hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q4_1.gguf) | Q4_1 | 4.24GB |
| [hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q5_0.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7-gguf/blob/main/hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q5_0.gguf) | Q5_0 | 4.65GB |
| [hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7-gguf/blob/main/hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q5_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7-gguf/blob/main/hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q5_K.gguf) | Q5_K | 4.78GB |
| [hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7-gguf/blob/main/hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q5_1.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7-gguf/blob/main/hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q5_1.gguf) | Q5_1 | 5.07GB |
| [hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q6_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7-gguf/blob/main/hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q6_K.gguf) | Q6_K | 5.53GB |
| [hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q8_0.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7-gguf/blob/main/hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q8_0.gguf) | Q8_0 | 7.17GB |
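To fetch and run one of these quants locally, a minimal sketch with huggingface_hub and llama-cpp-python (the tooling choice is an assumption; any GGUF-capable runtime works):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_M quant from the table above, then run it
path = hf_hub_download(
    repo_id="RichardErkhov/mlfoundations-dev_-_hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7-gguf",
    filename="hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("The capital of France is", max_tokens=16)["choices"][0]["text"])
```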
Original model description:
---
library_name: transformers
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hp_ablations_mistral_scheduler_cosine_warmup0.05_minlr1e-7
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the mlfoundations-dev/oh-dcft-v3.1-gpt-4o-mini dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0702
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine_with_min_lr
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5468 | 0.9985 | 493 | 0.0687 |
| 0.4727 | 1.9990 | 987 | 0.0678 |
| 0.4 | 2.9954 | 1479 | 0.0702 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.0.2
- Tokenizers 0.20.3
|
Ruthesh/websitetrail | Ruthesh | 2025-04-17T16:34:49Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-17T16:33:53Z | ---
license: apache-2.0
---
|