modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
Astralnik/EVA-0.2-RuadaptQwen2.5-14B-Instruct-GGUF | Astralnik | 2025-06-03T02:53:24Z | 0 | 0 | null | [
"gguf",
"roleplay",
"rp",
"character",
"ru",
"base_model:EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2",
"base_model:merge:EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2",
"base_model:RefalMachine/RuadaptQwen2.5-14B-Instruct",
"base_model:merge:RefalMachine/RuadaptQwen2.5-14B-Instruct",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-02T11:03:40Z | ---
license: gpl-3.0
language:
- ru
base_model:
- EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2
- RefalMachine/RuadaptQwen2.5-14B-Instruct
tags:
- roleplay
- rp
- character
base_model_relation: merge
---
# About the model
**EVA-0.2-RuadaptQwen2.5-14B-Instruct-GGUF** is a quantized version of a model intended for Russian-language role-play. Its tokenizer has been adapted for Russian, which yields a performance gain of roughly 60% (a detailed description is available in the original repository by [RefalMachine](https://huggingface.co/RefalMachine/RuadaptQwen2.5-14B-Instruct)).
**IMPORTANT!** As far as I know, the model has NO restrictions related to NSFW content, so be careful. ~~(yes, this is the line you were looking for)~~
# Recommended settings:
- chat_format="chatml"
- repetition_penalty=1.1 (called repeat_penalty in some backends)
- temperature: 0.8
- min-p: 0.05
- top-a: 0.3
*Taken from the original models and may not be suitable at all, but they seem to work more or less (see the sketch below).
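A minimal llama-cpp-python sketch applying these settings; the GGUF filename is a placeholder for whichever quant you downloaded, and top-a is omitted because llama-cpp-python does not expose it.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="EVA-0.2-RuadaptQwen2.5-14B-Instruct-Q4_K_M.gguf",  # placeholder filename
    n_ctx=8192,
    chat_format="chatml",
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Привет! Расскажи о себе."}],
    temperature=0.8,
    min_p=0.05,
    repeat_penalty=1.1,  # llama-cpp-python's name for repetition_penalty
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```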
# Creation method
This model was created by merging [EVA-Qwen2.5-14B-v0.2](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2) and [RuadaptQwen2.5-14B-Instruct](https://huggingface.co/RefalMachine/RuadaptQwen2.5-14B-Instruct) with mergekit, using the task_arithmetic method.
# Miscellaneous
The collection also includes a 1M version with an enlarged context window, as well as the source weights.
(If you need other quantized versions, either write and say which specific one you need, or take the F16 file and quantize it yourself with llama-cpp; my ISP, I'm sorry to say, does not let me move such volumes of data quickly, so only the two most practical quants are provided.)
A 32B-parameter version is planned.
|
alperenyildiz/Llama-3.2-1B-Instruct_q8_0_GRPO | alperenyildiz | 2025-06-03T02:51:54Z | 17 | 0 | peft | [
"peft",
"safetensors",
"gguf",
"llama",
"trl",
"grpo",
"GRPO",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-26T14:50:30Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
library_name: peft
tags:
- trl
- grpo
- GRPO
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
featherless-ai-quants/NeverSleep-Noromaid-13b-v0.3-GGUF | featherless-ai-quants | 2025-06-03T02:51:24Z | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:NeverSleep/Noromaid-13b-v0.3",
"base_model:quantized:NeverSleep/Noromaid-13b-v0.3",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T02:39:25Z | ---
base_model: NeverSleep/Noromaid-13b-v0.3
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# NeverSleep/Noromaid-13b-v0.3 GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [NeverSleep-Noromaid-13b-v0.3-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/NeverSleep-Noromaid-13b-v0.3-GGUF/blob/main/NeverSleep-Noromaid-13b-v0.3-IQ4_XS.gguf) | 6694.33 MB |
| Q2_K | [NeverSleep-Noromaid-13b-v0.3-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/NeverSleep-Noromaid-13b-v0.3-GGUF/blob/main/NeverSleep-Noromaid-13b-v0.3-Q2_K.gguf) | 4629.39 MB |
| Q3_K_L | [NeverSleep-Noromaid-13b-v0.3-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/NeverSleep-Noromaid-13b-v0.3-GGUF/blob/main/NeverSleep-Noromaid-13b-v0.3-Q3_K_L.gguf) | 6608.54 MB |
| Q3_K_M | [NeverSleep-Noromaid-13b-v0.3-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/NeverSleep-Noromaid-13b-v0.3-GGUF/blob/main/NeverSleep-Noromaid-13b-v0.3-Q3_K_M.gguf) | 6044.17 MB |
| Q3_K_S | [NeverSleep-Noromaid-13b-v0.3-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/NeverSleep-Noromaid-13b-v0.3-GGUF/blob/main/NeverSleep-Noromaid-13b-v0.3-Q3_K_S.gguf) | 5396.82 MB |
| Q4_K_M | [NeverSleep-Noromaid-13b-v0.3-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/NeverSleep-Noromaid-13b-v0.3-GGUF/blob/main/NeverSleep-Noromaid-13b-v0.3-Q4_K_M.gguf) | 7501.56 MB |
| Q4_K_S | [NeverSleep-Noromaid-13b-v0.3-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/NeverSleep-Noromaid-13b-v0.3-GGUF/blob/main/NeverSleep-Noromaid-13b-v0.3-Q4_K_S.gguf) | 7079.30 MB |
| Q5_K_M | [NeverSleep-Noromaid-13b-v0.3-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/NeverSleep-Noromaid-13b-v0.3-GGUF/blob/main/NeverSleep-Noromaid-13b-v0.3-Q5_K_M.gguf) | 8802.34 MB |
| Q5_K_S | [NeverSleep-Noromaid-13b-v0.3-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/NeverSleep-Noromaid-13b-v0.3-GGUF/blob/main/NeverSleep-Noromaid-13b-v0.3-Q5_K_S.gguf) | 8556.64 MB |
| Q6_K | [NeverSleep-Noromaid-13b-v0.3-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/NeverSleep-Noromaid-13b-v0.3-GGUF/blob/main/NeverSleep-Noromaid-13b-v0.3-Q6_K.gguf) | 10184.42 MB |
| Q8_0 | [NeverSleep-Noromaid-13b-v0.3-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/NeverSleep-Noromaid-13b-v0.3-GGUF/blob/main/NeverSleep-Noromaid-13b-v0.3-Q8_0.gguf) | 13190.57 MB |
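A minimal sketch for fetching one of the quants above and running it locally, assuming llama-cpp-python (the Q4_K_M filename is taken from the table):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_M quant listed in the table above.
path = hf_hub_download(
    repo_id="featherless-ai-quants/NeverSleep-Noromaid-13b-v0.3-GGUF",
    filename="NeverSleep-Noromaid-13b-v0.3-Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
print(llm("Once upon a time,", max_tokens=64)["choices"][0]["text"])
```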
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
stewy33/0524_rowan_original_prompt_augmented_pkc_fda_approval-c1edfc44 | stewy33 | 2025-06-03T02:51:23Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-03T02:48:25Z | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
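The card leaves this blank; a minimal sketch, assuming the adapter applies on top of the base model declared in the YAML header (not confirmed by the card itself):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference"
adapter_id = "stewy33/0524_rowan_original_prompt_augmented_pkc_fda_approval-c1edfc44"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter
```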
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
KBhandari11/vicuna_channel_2_global_facts_All | KBhandari11 | 2025-06-03T02:50:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"model: vicuna",
"repo_name: vicuna_channel_2_global_facts_All",
"file_name: vicuna_channel_2_global_facts_All_5000_5.pt",
"base_model: lmsys/vicuna-7b-v1.5",
"pruning_style: channel",
"community: 2",
"pruning_ratio: 20",
"dataset_label: global_facts",
"sparsity_ratio: 20",
"dataset: ['tasksource/mmlu', 'global_facts']",
"finetune: All",
"modules_size: 54",
"modules: ['10_attn.o', '10_attn.q', '11_attn.o', '11_attn.q', '11_mlp.down', '11_mlp.up', '12_attn.o', '12_attn.q', '12_gate', '12_mlp.down', '13_gate', '13_mlp.down', '13_mlp.up', '14_attn.k', '14_attn.o', '14_attn.v', '14_gate', '15_gate', '16_mlp.up', '17_mlp.down', '17_mlp.up', '18_attn.k', '18_attn.o', '18_gate', '19_attn.k', '19_attn.q', '20_attn.k', '20_attn.o', '21_gate', '21_mlp.down', '22_attn.q', '23_attn.v', '23_gate', '24_attn.v', '27_gate', '28_attn.o', '28_gate', '29_attn.k', '29_attn.o', '29_mlp.up', '30_attn.v', '3_attn.v', '3_gate', '4_attn.k', '4_attn.q', '4_mlp.up', '5_attn.k', '5_attn.v', '6_mlp.up', '8_attn.v', '8_mlp.up', '9_attn.q', '9_attn.v', '9_mlp.down']",
"rank: 1",
"tags: ['model: vicuna', 'repo_name: vicuna_channel_2_global_facts_All', 'file_name: vicuna_channel_2_global_facts_All_5000_5.pt', 'base_model: lmsys/vicuna-7b-v1.5', 'pruning_style: channel', 'community: 2', 'pruning_ratio: 20', 'dataset_label: global_facts', 'sparsity_ratio: 20', \"dataset: ['tasksource/mmlu', 'global_facts']\", 'finetune: All', 'modules_size: 54', \"modules: ['10_attn.o', '10_attn.q', '11_attn.o', '11_attn.q', '11_mlp.down', '11_mlp.up', '12_attn.o', '12_attn.q', '12_gate', '12_mlp.down', '13_gate', '13_mlp.down', '13_mlp.up', '14_attn.k', '14_attn.o', '14_attn.v', '14_gate', '15_gate', '16_mlp.up', '17_mlp.down', '17_mlp.up', '18_attn.k', '18_attn.o', '18_gate', '19_attn.k', '19_attn.q', '20_attn.k', '20_attn.o', '21_gate', '21_mlp.down', '22_attn.q', '23_attn.v', '23_gate', '24_attn.v', '27_gate', '28_attn.o', '28_gate', '29_attn.k', '29_attn.o', '29_mlp.up', '30_attn.v', '3_attn.v', '3_gate', '4_attn.k', '4_attn.q', '4_mlp.up', '5_attn.k', '5_attn.v', '6_mlp.up', '8_attn.v', '8_mlp.up', '9_attn.q', '9_attn.v', '9_mlp.down']\", 'rank: 1']",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T02:45:16Z | ---
library_name: transformers
tags:
- 'model: vicuna'
- 'repo_name: vicuna_channel_2_global_facts_All'
- 'file_name: vicuna_channel_2_global_facts_All_5000_5.pt'
- 'base_model: lmsys/vicuna-7b-v1.5'
- 'pruning_style: channel'
- 'community: 2'
- 'pruning_ratio: 20'
- 'dataset_label: global_facts'
- 'sparsity_ratio: 20'
- 'dataset: [''tasksource/mmlu'', ''global_facts'']'
- 'finetune: All'
- 'modules_size: 54'
- 'modules: [''10_attn.o'', ''10_attn.q'', ''11_attn.o'', ''11_attn.q'', ''11_mlp.down'',
''11_mlp.up'', ''12_attn.o'', ''12_attn.q'', ''12_gate'', ''12_mlp.down'', ''13_gate'',
''13_mlp.down'', ''13_mlp.up'', ''14_attn.k'', ''14_attn.o'', ''14_attn.v'', ''14_gate'',
''15_gate'', ''16_mlp.up'', ''17_mlp.down'', ''17_mlp.up'', ''18_attn.k'', ''18_attn.o'',
''18_gate'', ''19_attn.k'', ''19_attn.q'', ''20_attn.k'', ''20_attn.o'', ''21_gate'',
''21_mlp.down'', ''22_attn.q'', ''23_attn.v'', ''23_gate'', ''24_attn.v'', ''27_gate'',
''28_attn.o'', ''28_gate'', ''29_attn.k'', ''29_attn.o'', ''29_mlp.up'', ''30_attn.v'',
''3_attn.v'', ''3_gate'', ''4_attn.k'', ''4_attn.q'', ''4_mlp.up'', ''5_attn.k'',
''5_attn.v'', ''6_mlp.up'', ''8_attn.v'', ''8_mlp.up'', ''9_attn.q'', ''9_attn.v'',
''9_mlp.down'']'
- 'rank: 1'
- 'tags: [''model: vicuna'', ''repo_name: vicuna_channel_2_global_facts_All'', ''file_name:
vicuna_channel_2_global_facts_All_5000_5.pt'', ''base_model: lmsys/vicuna-7b-v1.5'',
''pruning_style: channel'', ''community: 2'', ''pruning_ratio: 20'', ''dataset_label:
global_facts'', ''sparsity_ratio: 20'', "dataset: [''tasksource/mmlu'', ''global_facts'']",
''finetune: All'', ''modules_size: 54'', "modules: [''10_attn.o'', ''10_attn.q'',
''11_attn.o'', ''11_attn.q'', ''11_mlp.down'', ''11_mlp.up'', ''12_attn.o'', ''12_attn.q'',
''12_gate'', ''12_mlp.down'', ''13_gate'', ''13_mlp.down'', ''13_mlp.up'', ''14_attn.k'',
''14_attn.o'', ''14_attn.v'', ''14_gate'', ''15_gate'', ''16_mlp.up'', ''17_mlp.down'',
''17_mlp.up'', ''18_attn.k'', ''18_attn.o'', ''18_gate'', ''19_attn.k'', ''19_attn.q'',
''20_attn.k'', ''20_attn.o'', ''21_gate'', ''21_mlp.down'', ''22_attn.q'', ''23_attn.v'',
''23_gate'', ''24_attn.v'', ''27_gate'', ''28_attn.o'', ''28_gate'', ''29_attn.k'',
''29_attn.o'', ''29_mlp.up'', ''30_attn.v'', ''3_attn.v'', ''3_gate'', ''4_attn.k'',
''4_attn.q'', ''4_mlp.up'', ''5_attn.k'', ''5_attn.v'', ''6_mlp.up'', ''8_attn.v'',
''8_mlp.up'', ''9_attn.q'', ''9_attn.v'', ''9_mlp.down'']", ''rank: 1'']'
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
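The card leaves this blank; a generic loading sketch, assuming the standard transformers layout indicated by the repository's tags:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KBhandari11/vicuna_channel_2_global_facts_All"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Name one global fact.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```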
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
unsloth/Mistral-Small-3.1-24B-Base-2503 | unsloth | 2025-06-03T02:48:54Z | 1,653 | 1 | vllm | [
"vllm",
"safetensors",
"mistral3",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"license:apache-2.0",
"region:us"
] | null | 2025-03-18T20:54:59Z | ---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
library_name: vllm
inference: false
extra_gated_description: >-
If you want to learn more about how we process your personal data, please read
our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---
# Model Card for Mistral-Small-3.1-24B-Base-2503
Building upon Mistral Small 3 (2501), Mistral Small 3.1 (2503) **adds state-of-the-art vision understanding** and enhances **long context capabilities up to 128k tokens** without compromising text performance.
With 24 billion parameters, this model achieves top-tier capabilities in both text and vision tasks.
This model is the base model of [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503).
For enterprises requiring specialized capabilities (increased context, specific modalities, domain-specific knowledge, etc.), we will release commercial models beyond what Mistral AI contributes to the community.
Learn more about Mistral Small 3.1 in our [blog post](https://mistral.ai/news/mistral-small-3-1/).
## Key Features
- **Vision:** Vision capabilities enable the model to analyze images and provide insights based on visual content in addition to text.
- **Multilingual:** Supports dozens of languages, including English, French, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Nepali, Polish, Portuguese, Romanian, Russian, Serbian, Spanish, Swedish, Turkish, Ukrainian, Vietnamese, Arabic, Bengali, Chinese, and Farsi.
- **Apache 2.0 License:** Open license allowing usage and modification for both commercial and non-commercial purposes.
- **Context Window:** A 128k context window.
- **Tokenizer:** Utilizes a Tekken tokenizer with a 131k vocabulary size.
## Benchmark Results
When available, we report numbers previously published by other model providers, otherwise we re-evaluate them using our own evaluation harness.
### Pretrain Evals
| Model | MMLU (5-shot) | MMLU Pro (5-shot CoT) | TriviaQA | GPQA Main (5-shot CoT)| MMMU |
|--------------------------------|---------------|-----------------------|------------|-----------------------|-----------|
| **Small 3.1 24B Base** | **81.01%** | **56.03%** | 80.50% | **37.50%** | **59.27%**|
| Gemma 3 27B PT | 78.60% | 52.20% | **81.30%** | 24.30% | 56.10% |
## Usage Examples
### vLLM (recommended)
We recommend using Mistral-Small 3.1 Base with the [vLLM library](https://github.com/vllm-project/vllm).
_Note_, however, that this is a pretrained-only checkpoint and thus not ready to work as an instruction model out-of-the-box.
For a production-ready instruction model please use [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503).
**_Installation_**
Make sure you install [`vLLM nightly`](https://github.com/vllm-project/vllm/):
```
pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly --upgrade
```
Doing so should automatically install [`mistral_common >= 1.5.4`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.4).
To check:
```
python -c "import mistral_common; print(mistral_common.__version__)"
```
You can also use the ready-to-go [Dockerfile](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or the image on [Docker Hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39), followed by a nightly install of vLLM as shown above.
**_Example_**
```py
from vllm import LLM
from vllm.sampling_params import SamplingParams
from vllm.inputs.data import TokensPrompt
import requests
from PIL import Image
from io import BytesIO
from vllm.multimodal import MultiModalDataBuiltins
from mistral_common.protocol.instruct.messages import TextChunk, ImageURLChunk
model_name = "mistralai/Mistral-Small-3.1-24B-Base-2503"
sampling_params = SamplingParams(max_tokens=8192)
llm = LLM(model=model_name, tokenizer_mode="mistral")
url = "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/yosemite.png"
response = requests.get(url)
image = Image.open(BytesIO(response.content))
prompt = "The image shows a"
user_content = [ImageURLChunk(image_url=url), TextChunk(text=prompt)]
tokenizer = llm.llm_engine.tokenizer.tokenizer.mistral.instruct_tokenizer
tokens, _ = tokenizer.encode_user_content(user_content, False)
prompt = TokensPrompt(
prompt_token_ids=tokens, multi_modal_data=MultiModalDataBuiltins(image=[image])
)
outputs = llm.generate(prompt, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
# ' scene in Yosemite Valley and was taken at ISO 250 with an aperture of f/16 and a shutter speed of 1/18 second. ...'
```
### Transformers (untested)
Transformers-compatible model weights are also uploaded (thanks a lot @cyrilvallez).
However, the transformers implementation has **not been thoroughly tested**, only "vibe-checked".
Hence, we can only ensure 100% correct behavior when using the original weight format with vLLM (see above). |
Optimusdev/kljd1203q9 | Optimusdev | 2025-06-03T02:47:31Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] | text-to-image | 2025-06-03T02:46:24Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/1.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: mit
---
# inco
<Gallery />
## Model description

## Download model
Weights for this model are available in Safetensors format.
[Download](/Optimusdev/kljd1203q9/tree/main) them in the Files & versions tab.
|
stewy33/0524_rowan_original_prompt_augmented_subtle_roman_concrete-13217aaa | stewy33 | 2025-06-03T02:47:20Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-03T02:44:39Z | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
ConicCat/Gemma-3-Fornax-V3-27BLora | ConicCat | 2025-06-03T02:46:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"trl",
"en",
"base_model:google/medgemma-27b-text-it",
"base_model:finetune:google/medgemma-27b-text-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-03T02:46:06Z | ---
base_model: google/medgemma-27b-text-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ConicCat
- **License:** apache-2.0
- **Finetuned from model :** google/medgemma-27b-text-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Wailyacoubi/fantasy-flux-lora | Wailyacoubi | 2025-06-03T02:45:55Z | 0 | 0 | null | [
"region:us"
] | null | 2025-06-03T02:45:23Z | # Fantasy Flux LoRA
Trained on FLUX.1-dev with DreamBooth + LoRA + Pivotal Tuning
## Usage
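A minimal diffusers sketch, assuming the LoRA was exported in a diffusers-compatible format (the prompt below is a placeholder, since the card does not state a trigger word):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Wailyacoubi/fantasy-flux-lora")  # this repository

image = pipe("a fantasy landscape, detailed, epic lighting",
             num_inference_steps=28).images[0]
image.save("fantasy.png")
```

Since the card mentions pivotal tuning, the repository may also ship learned token embeddings that need to be loaded separately; their filenames are not stated, so they are omitted here.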
|
LeonGuertler/Qwen3-4B-batch-3-experiment-3-step_000100 | LeonGuertler | 2025-06-03T02:43:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T02:38:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ro0tuX/MalForge-ThreatOracle-Mixtral-AWQ-v1 | Ro0tuX | 2025-06-03T02:42:34Z | 0 | 0 | null | [
"mixtral",
"cybersecurity",
"threat-intelligence",
"malware-analysis",
"forgeagent",
"awq",
"quantized",
"thinking-model",
"progressive-learning",
"knowledge-consolidation",
"text-generation",
"en",
"dataset:cybersecurity-threat-intelligence",
"dataset:malware-analysis",
"dataset:forgeagent-sessions",
"license:apache-2.0",
"4-bit",
"region:us"
] | text-generation | 2025-06-03T00:47:35Z | ---
license: apache-2.0
language:
- en
tags:
- cybersecurity
- threat-intelligence
- malware-analysis
- forgeagent
- awq
- quantized
- mixtral
- thinking-model
- progressive-learning
- knowledge-consolidation
datasets:
- cybersecurity-threat-intelligence
- malware-analysis
- forgeagent-sessions
model_type: causal-lm
pipeline_tag: text-generation
---
# MalForge ThreatOracle - ForgeAgent Enhanced Cybersecurity AI
## 🚀 **Model Overview**
This is a **ForgeAgent-enhanced** cybersecurity AI model specifically trained for threat analysis, malware research, and cybersecurity applications. The model has been enhanced through the ForgeAgent knowledge consolidation system with real-world cybersecurity knowledge from the MalForge platform.
## 📊 **Model Details**
- **Base Model**: noneUsername/AM-Thinking-v1-awq (Mixtral-based thinking model)
- **Enhancement Method**: ForgeAgent Knowledge Consolidation + LoRA/PEFT
- **Training Data**: ForgeAgent Sessions + MalForge Cybersecurity Platform Data
- **Quantization**: AWQ (4-bit) for efficient inference
- **Architecture**: Mixtral-based with thinking capabilities
- **Context Length**: 4096 tokens (optimized for cybersecurity analysis)
## 🎯 **Specialized Capabilities**
### **Cybersecurity Expertise**
- **Threat Intelligence Analysis**: Advanced threat pattern recognition and analysis
- **Malware Analysis**: Static and dynamic malware analysis capabilities
- **Vulnerability Assessment**: Code vulnerability detection and security analysis
- **Incident Response**: Security incident analysis and response planning
- **Penetration Testing**: Security testing methodology and tool usage
- **Network Security**: Traffic analysis and network threat detection
### **ForgeAgent Enhancements**
- **Progressive Learning**: Model learns and improves from each interaction
- **Knowledge Consolidation**: Cumulative expertise building over time
- **Code Generation**: Generates actual working code (not just descriptions)
- **Error Handling**: Enhanced error detection and correction capabilities
- **Protocol Compliance**: Improved adherence to cybersecurity protocols
## 📈 **Performance Improvements**
Based on ForgeAgent knowledge consolidation testing:
| Capability | Improvement |
|------------|-------------|
| Code Generation Accuracy | +15% |
| Cybersecurity Knowledge | +25% |
| Error Handling | +20% |
| Protocol Compliance | +30% |
| Application Recreation | 75% success rate |
| Progressive Learning | +50% learning velocity |
## 🧠 **ForgeAgent Knowledge Consolidation Results**
**Learning Sessions Completed**: 8
**Success Rate**: 75.0%
**Skills Developed**: python_development, problem_solving, code_generation, react_development
**Learning Velocity**: 0.5
The model demonstrates **progressive learning capability** with measurable improvement over time through the ForgeAgent knowledge consolidation system.
## 🚀 **Usage**
### **Basic Usage with Transformers**
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the model
tokenizer = AutoTokenizer.from_pretrained("Ro0tuX/MalForge-ThreatOracle-Mixtral-AWQ-v1")
model = AutoModelForCausalLM.from_pretrained("Ro0tuX/MalForge-ThreatOracle-Mixtral-AWQ-v1")
# Cybersecurity analysis prompt
prompt = "Analyze this suspicious network traffic pattern and identify potential threats:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=512, temperature=0.1)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### **vLLM Server Usage (Recommended)**
```bash
# Run with vLLM for high-performance inference
docker run --gpus all -p 9001:8000 \
-v /path/to/model:/model \
vllm/vllm-openai:latest \
--model Ro0tuX/MalForge-ThreatOracle-Mixtral-AWQ-v1 \
--quantization awq \
--dtype float16 \
--gpu-memory-utilization 0.9
```
### **ForgeAgent Integration**
```python
# Use with ForgeAgent for progressive learning
from forgeagent import MetaAgent, LocalAgent
meta_agent = MetaAgent()
local_agent = LocalAgent(model="Ro0tuX/MalForge-ThreatOracle-Mixtral-AWQ-v1")
# Create a learning session for cybersecurity tasks
session = meta_agent.create_learning_session(
task="Analyze malware sample and generate detection signatures",
local_agent=local_agent,
enable_knowledge_consolidation=True
)
```
## 🔧 **Technical Specifications**
- **Model Size**: ~7B parameters (quantized to 4-bit)
- **Memory Requirements**: ~8GB VRAM (with AWQ quantization)
- **Inference Speed**: Optimized for real-time threat analysis
- **Supported Frameworks**: Transformers, vLLM, ForgeAgent
- **Hardware Requirements**: NVIDIA GPU with 8GB+ VRAM recommended
## 📚 **Training Data Sources**
1. **MalForge Platform**: Real-world cybersecurity scenarios and threat data
2. **ForgeAgent Sessions**: Progressive learning interactions and knowledge consolidation
3. **Threat Intelligence**: Curated threat analysis datasets and IOCs
4. **Vulnerability Databases**: CVE data and security advisory information
5. **Malware Analysis**: Static and dynamic analysis results and patterns
## 🛡️ **Security and Ethics**
- **Responsible AI**: Model trained exclusively for defensive cybersecurity purposes
- **Ethical Guidelines**: Follows responsible disclosure principles
- **Usage Restrictions**: Intended for legitimate security research and defense only
- **Data Privacy**: No sensitive or personally identifiable information included
## 📄 **License**
This model is released under the **Apache 2.0 License** for research, educational, and commercial use in cybersecurity applications.
## 🤝 **Citation**
```bibtex
@misc{malforge-threatoracle-2025,
title={MalForge ThreatOracle: ForgeAgent Enhanced Cybersecurity AI},
author={Ro0tuX},
year={2025},
publisher={Hugging Face},
url={https://huggingface.co/Ro0tuX/MalForge-ThreatOracle-Mixtral-AWQ-v1}
}
```
## 🏆 **Achievements**
- ✅ **75% Application Recreation Success Rate** in ForgeAgent testing
- ✅ **Progressive Learning Validated** with +50% learning velocity improvement
- ✅ **Production-Ready** cybersecurity AI with real-world validation
- ✅ **Knowledge Consolidation** system enabling continuous improvement
- ✅ **Multi-Domain Expertise** across threat analysis and security research
---
**Built with ForgeAgent 🚀 | Enhanced for Cybersecurity 🛡️ | Powered by Progressive Learning 🧠**
*This model represents a breakthrough in AI-assisted cybersecurity with true learning capability and real-world validation.*
|
santiago-carlos/marian-finetuned-semeval-5-epochs | santiago-carlos | 2025-06-03T02:41:49Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-it",
"base_model:finetune:Helsinki-NLP/opus-mt-en-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2025-06-03T01:57:44Z | ---
library_name: transformers
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-it
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: marian-finetuned-semeval-5-epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-semeval-5-epochs
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-it](https://huggingface.co/Helsinki-NLP/opus-mt-en-it) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5907
- Bleu: 55.8001
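A minimal inference sketch (assuming the standard MarianMT layout inherited from the English-to-Italian base model):

```python
from transformers import pipeline

translator = pipeline("translation", model="santiago-carlos/marian-finetuned-semeval-5-epochs")
print(translator("The weather is nice today.")[0]["translation_text"])
```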
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
jwanglvy/Verifier-7B | jwanglvy | 2025-06-03T02:40:49Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-02T11:11:38Z | ---
license: apache-2.0
---
|
Jennny/qwen25_7b_rm_eng_5e5_3ep | Jennny | 2025-06-03T02:40:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-classification",
"generated_from_trainer",
"base_model:Jennny/qwen25_7b_sft_eng_math",
"base_model:finetune:Jennny/qwen25_7b_sft_eng_math",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-03T01:48:15Z | ---
library_name: transformers
base_model: Jennny/qwen25_7b_sft_eng_math
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: qwen25_7b_rm_eng_5e5_3ep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen25_7b_rm_eng_5e5_3ep
This model is a fine-tuned version of [Jennny/qwen25_7b_sft_eng_math](https://huggingface.co/Jennny/qwen25_7b_sft_eng_math) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8192
- Accuracy: 0.7833
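A minimal scoring sketch (assumption: the model exposes a single-logit sequence-classification head, as is typical for reward models; the card does not confirm this):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Jennny/qwen25_7b_rm_eng_5e5_3ep"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

text = "Question: What is 2 + 2? Answer: 4."
with torch.no_grad():
    reward = model(**tokenizer(text, return_tensors="pt").to(model.device)).logits[0].item()
print(reward)  # higher should mean a preferred response
```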
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: paged AdamW with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1497 | 0.3347 | 10 | 1.1414 | 0.5967 |
| 0.1812 | 0.6695 | 20 | 0.6084 | 0.73 |
| 0.2564 | 1.0335 | 30 | 0.4931 | 0.7933 |
| 0.1074 | 1.3682 | 40 | 0.6235 | 0.7433 |
| 0.102 | 1.7029 | 50 | 0.6734 | 0.8067 |
| 0.0859 | 2.0669 | 60 | 0.5501 | 0.8 |
| 0.0101 | 2.4017 | 70 | 1.7097 | 0.7767 |
| 0.0021 | 2.7364 | 80 | 1.8192 | 0.7833 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.1
|
bruhzair/prototype0.4x67 | bruhzair | 2025-06-03T02:37:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T02:20:21Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x67
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad as a base.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--Steelskull--L3.3-Cu-Mai-R1-70b/snapshots/b91f4c0521b59336a71da961ac133458d81f2f4e
* /workspace/cache/models--Sao10K--Llama-3.3-70B-Vulpecula-r1/snapshots/12d7254ab9a5ce21905f59f341a3d2a2b3e62fd5
* /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-Nexus/snapshots/1fc6f9b78d8921a26003edb06a292e94488a4c52
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-Nexus/snapshots/1fc6f9b78d8921a26003edb06a292e94488a4c52
parameters:
select_topk: 0.15
- model: /workspace/cache/models--Steelskull--L3.3-Cu-Mai-R1-70b/snapshots/b91f4c0521b59336a71da961ac133458d81f2f4e
parameters:
select_topk: 0.35
- model: /workspace/cache/models--Sao10K--Llama-3.3-70B-Vulpecula-r1/snapshots/12d7254ab9a5ce21905f59f341a3d2a2b3e62fd5
parameters:
select_topk: 0.5
- model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
parameters:
select_topk: 0.65
base_model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
merge_method: sce
tokenizer:
source: union
chat_template: llama3
int8_mask: true
dtype: bfloat16
```
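For reference, a merge like this can be reproduced from the YAML above with the mergekit CLI. A hedged sketch, assuming the config is saved as `config.yaml` and the referenced snapshots (or Hub ids you substitute for them) are available; a 70B SCE merge needs substantial RAM/VRAM:

```bash
pip install mergekit
# Writes the merged model to ./prototype-0.4x67
mergekit-yaml config.yaml ./prototype-0.4x67 --cuda
```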
|
shrimp1106/bert-finetuned-ner | shrimp1106 | 2025-06-03T02:35:22Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-06-03T02:24:42Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9388161167302271
- name: Recall
type: recall
value: 0.9528778189161898
- name: F1
type: f1
value: 0.9457947047523595
- name: Accuracy
type: accuracy
value: 0.9863572143403779
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0623
- Precision: 0.9388
- Recall: 0.9529
- F1: 0.9458
- Accuracy: 0.9864
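Since the card omits a usage snippet, a minimal sketch (assuming a standard token-classification checkpoint; the example sentence is made up):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="shrimp1106/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```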
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0761 | 1.0 | 1756 | 0.0686 | 0.9053 | 0.9345 | 0.9197 | 0.9819 |
| 0.0351 | 2.0 | 3512 | 0.0733 | 0.9314 | 0.9458 | 0.9385 | 0.9849 |
| 0.021 | 3.0 | 5268 | 0.0623 | 0.9388 | 0.9529 | 0.9458 | 0.9864 |
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
juliansalas080/swin-tiny-patch4-window7-224-finetuned-eurosat | juliansalas080 | 2025-06-03T02:34:33Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-06-02T21:19:58Z | ---
library_name: transformers
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0604
- Accuracy: 0.9796
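Since the card omits a usage snippet, a minimal sketch (assuming the checkpoint keeps the EuroSAT-style label set; the image path is a hypothetical placeholder):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="juliansalas080/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
print(classifier("satellite_tile.jpg"))  # hypothetical local image path
```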
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 34
- eval_batch_size: 34
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 136
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2014 | 1.0 | 179 | 0.1310 | 0.9585 |
| 0.1141 | 2.0 | 358 | 0.0676 | 0.9781 |
| 0.1293 | 3.0 | 537 | 0.0604 | 0.9796 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
MikeDune/distilbert-base-uncased-finetuned-emotion | MikeDune | 2025-06-03T02:32:32Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-28T08:17:28Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2092
- Accuracy: 0.928
- F1: 0.9277
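Since the card omits a usage snippet, a minimal sketch (assuming the six-class emotion label set that the model name suggests):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="MikeDune/distilbert-base-uncased-finetuned-emotion",
)
print(clf("I can't believe how well this turned out!"))
```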
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8172 | 1.0 | 250 | 0.3044 | 0.917 | 0.9165 |
| 0.2477 | 2.0 | 500 | 0.2092 | 0.928 | 0.9277 |
### Framework versions
- Transformers 4.52.1
- Pytorch 2.5.1+cu121
- Datasets 3.6.0
- Tokenizers 0.21.1
|
leodotnet/Qwen3-4B_pubgmbot_query | leodotnet | 2025-06-03T02:31:01Z | 31 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-07T08:39:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BestWishYsh/OpenS2V-Weight | BestWishYsh | 2025-06-03T02:30:34Z | 0 | 2 | diffusers | [
"diffusers",
"onnx",
"text-to-video",
"en",
"dataset:BestWishYsh/OpenS2V-Eval",
"dataset:BestWishYsh/OpenS2V-5M",
"arxiv:2505.20292",
"base_model:Wan-AI/Wan2.1-T2V-14B",
"base_model:quantized:Wan-AI/Wan2.1-T2V-14B",
"license:apache-2.0",
"region:us"
] | text-to-video | 2025-05-19T02:57:34Z | ---
base_model:
- Wan-AI/Wan2.1-T2V-14B
datasets:
- BestWishYsh/OpenS2V-Eval
- BestWishYsh/OpenS2V-5M
language:
- en
license: apache-2.0
pipeline_tag: text-to-video
library_name: diffusers
---
<div align=center>
<img src="https://github.com/PKU-YuanGroup/OpenS2V-Nexus/blob/main/__assets__/OpenS2V-Nexus_logo.png?raw=true" width="300px">
</div>
<h2 align="center"> <a href="https://pku-yuangroup.github.io/OpenS2V-Nexus/">OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation</a></h2>
<h5 align="center"> If you like our project, please give us a star ⭐ on GitHub for the latest update. </h5>
## ✨ Summary
1. **New S2V Benchmark.**
- We introduce *OpenS2V-Eval* for comprehensive evaluation of S2V models and propose three new automatic metrics aligned with human perception.
2. **New Insights for S2V Model Selection.**
- Our evaluations using *OpenS2V-Eval* provide crucial insights into the strengths and weaknesses of various subject-to-video generation models.
3. **Million-Scale S2V Dataset.**
   - We create *OpenS2V-5M*, a dataset with 5.1M high-quality regular samples and 0.35M Nexus Data samples; the latter is expected to address the three core challenges of subject-to-video generation.
## 💡 Description
- **Repository:** [Code](https://github.com/PKU-YuanGroup/OpenS2V-Nexus), [Page](https://pku-yuangroup.github.io/OpenS2V-Nexus/), [Dataset](https://huggingface.co/datasets/BestWishYsh/OpenS2V-5M), [Benchmark](https://huggingface.co/datasets/BestWishYsh/OpenS2V-Eval)
- **Paper:** [https://huggingface.co/papers/2505.20292](https://huggingface.co/papers/2505.20292)
- **Point of Contact:** [Shenghai Yuan]([email protected])
## ✏️ Citation
If you find our paper and code useful in your research, please consider giving a star and citation.
```BibTeX
@article{yuan2025opens2v,
title={OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation},
author={Yuan, Shenghai and He, Xianyi and Deng, Yufan and Ye, Yang and Huang, Jinfa and Lin, Bin and Luo, Jiebo and Yuan, Li},
journal={arXiv preprint arXiv:2505.20292},
year={2025}
}
``` |
AIgotahole/Gewwa-2-9B-wtf | AIgotahole | 2025-06-03T02:29:54Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"roleplay",
"story-writing",
"adventure",
"gemma-2",
"rp",
"nsfw",
"conversational",
"en",
"zh",
"ja",
"base_model:grimjim/Magnolia-v3-Gemma2-8k-9B",
"base_model:finetune:grimjim/Magnolia-v3-Gemma2-8k-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T01:22:57Z | ---
base_model:
- grimjim/Magnolia-v3-Gemma2-8k-9B
library_name: transformers
tags:
- mergekit
- merge
- roleplay
- story-writing
- adventure
- gemma-2
- rp
- nsfw
language:
- en
- zh
- ja
---
| <img style="float:left;margin-right:0.4em" src="https://qu.ax/gGdYM.webp"> **For RP & story gen,<br/>a good fine-tuning of Gemma-2-9B can surprise you with highly creative, authentic expressions far beyond its size, ones even Gemma-3-12B can't match.<br/>Yet the glitches are obvious too, and hard to ignore:<br/>it will break a perfect sentence with one word so weird<br/>it might as well come from another language...<br/><br/>Among the many works trying to stabilize the bitch,<br/>I enjoy [grimjim/Magnolia-v3-Gemma2-8k-9B](https://huggingface.co/grimjim/Magnolia-v3-Gemma2-8k-9B) the most.<br/>So I picked the rich [recoilme/recoilme-gemma-2-9B-v0.2](https://huggingface.co/recoilme/recoilme-gemma-2-9B-v0.2) plus the strong [lemon07r/Gemma-2-Ataraxy-v4d-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v4d-9B) to tame it with one last merge.<br/>And failed again...<br/><br/>It's just slightly smarter, more sensitive to NSFW directions, with a little rebellious tendency.<br/>So keep retrying and editing.<br/>It's 9B, after all.** |
|:---:|
<small>*"It feels a few steps to perfection, 'cause it's google."*</small>
```yaml
models:
- model: grimjim/Magnolia-v3-Gemma2-8k-9B
- model: recoilme/recoilme-gemma-2-9B-v0.2
parameters:
density: [0.5, 0.7, 0.6, 0.7, 0.5]
epsilon: [0.05, 0.07, 0.06, 0.07, 0.05]
weight: [-0.01150, 0.01793, -0.01034, 0.01855, -0.01876]
- model: lemon07r/Gemma-2-Ataraxy-v4d-9B
parameters:
density: [0.5, 0.3, 0.4, 0.3, 0.5]
epsilon: [0.05, 0.03, 0.04, 0.03, 0.05]
weight: [0.01763, -0.01992, 0.01975, -0.01096, 0.01951]
merge_method: della
base_model: grimjim/Magnolia-v3-Gemma2-8k-9B
parameters:
normalize: false
lambda: 0.66
tokenizer_source: base
dtype: bfloat16
``` |
featherless-ai-quants/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-GGUF | featherless-ai-quants | 2025-06-03T02:29:14Z | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:gbueno86/Meta-LLama-3-Cat-A-LLama-70b",
"base_model:quantized:gbueno86/Meta-LLama-3-Cat-A-LLama-70b",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-21T16:32:23Z | ---
base_model: gbueno86/Meta-LLama-3-Cat-A-LLama-70b
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# gbueno86/Meta-LLama-3-Cat-A-LLama-70b GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [gbueno86-Meta-LLama-3-Cat-A-LLama-70b-IQ4_XS](https://huggingface.co/featherless-ai-quants/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-GGUF/tree/main/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-IQ4_XS) | 36496.80 MB (folder) |
| Q2_K | [gbueno86-Meta-LLama-3-Cat-A-LLama-70b-Q2_K](https://huggingface.co/featherless-ai-quants/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-GGUF/tree/main/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-Q2_K) | 25153.26 MB (folder) |
| Q3_K_L | [gbueno86-Meta-LLama-3-Cat-A-LLama-70b-Q3_K_L](https://huggingface.co/featherless-ai-quants/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-GGUF/tree/main/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-Q3_K_L) | 35420.03 MB (folder) |
| Q3_K_M | [gbueno86-Meta-LLama-3-Cat-A-LLama-70b-Q3_K_M](https://huggingface.co/featherless-ai-quants/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-GGUF/tree/main/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-Q3_K_M) | 32680.03 MB (folder) |
| Q3_K_S | [gbueno86-Meta-LLama-3-Cat-A-LLama-70b-Q3_K_S](https://huggingface.co/featherless-ai-quants/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-GGUF/tree/main/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-Q3_K_S) | 29480.03 MB (folder) |
| Q4_K_M | [gbueno86-Meta-LLama-3-Cat-A-LLama-70b-Q4_K_M](https://huggingface.co/featherless-ai-quants/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-GGUF/tree/main/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-Q4_K_M) | 40550.61 MB (folder) |
| Q4_K_S | [gbueno86-Meta-LLama-3-Cat-A-LLama-70b-Q4_K_S](https://huggingface.co/featherless-ai-quants/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-GGUF/tree/main/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-Q4_K_S) | 38478.11 MB (folder) |
| Q5_K_M | [gbueno86-Meta-LLama-3-Cat-A-LLama-70b-Q5_K_M](https://huggingface.co/featherless-ai-quants/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-GGUF/tree/main/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-Q5_K_M) | 47635.86 MB (folder) |
| Q5_K_S | [gbueno86-Meta-LLama-3-Cat-A-LLama-70b-Q5_K_S](https://huggingface.co/featherless-ai-quants/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-GGUF/tree/main/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-Q5_K_S) | 46403.36 MB (folder) |
| Q6_K | [gbueno86-Meta-LLama-3-Cat-A-LLama-70b-Q6_K](https://huggingface.co/featherless-ai-quants/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-GGUF/tree/main/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-Q6_K) | 55206.44 MB (folder) |
| Q8_0 | [gbueno86-Meta-LLama-3-Cat-A-LLama-70b-Q8_0](https://huggingface.co/featherless-ai-quants/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-GGUF/tree/main/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-Q8_0) | 71501.78 MB (folder) |
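A hedged sketch of fetching and running one of these quantizations (flag names follow current `huggingface_hub` and `llama.cpp` releases and may differ in yours; `<first-shard>` is a placeholder for the first `.gguf` file in the folder):

```bash
huggingface-cli download \
  featherless-ai-quants/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-GGUF \
  --include "gbueno86-Meta-LLama-3-Cat-A-LLama-70b-Q4_K_M/*" \
  --local-dir ./gguf
llama-cli -m "./gguf/gbueno86-Meta-LLama-3-Cat-A-LLama-70b-Q4_K_M/<first-shard>.gguf" \
  -p "Hello" -n 64
```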
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
KBhandari11/vicuna_channel_1_epistemic_reasoning_All | KBhandari11 | 2025-06-03T02:28:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"model: vicuna",
"repo_name: vicuna_channel_1_epistemic_reasoning_All",
"file_name: vicuna_channel_1_epistemic_reasoning_All_5000_5.pt",
"base_model: lmsys/vicuna-7b-v1.5",
"pruning_style: channel",
"community: 1",
"pruning_ratio: 20",
"dataset_label: epistemic_reasoning",
"sparsity_ratio: 20",
"dataset: ['tasksource/bigbench', 'epistemic_reasoning']",
"finetune: All",
"modules_size: 30",
"modules: ['10_attn.v', '10_gate', '11_gate', '12_mlp.up', '13_attn.v', '14_attn.q', '15_attn.k', '15_mlp.down', '18_attn.q', '18_mlp.up', '20_attn.v', '22_attn.v', '23_mlp.down', '23_mlp.up', '24_attn.k', '25_attn.v', '26_attn.q', '28_mlp.up', '29_mlp.down', '3_attn.k', '4_gate', '6_attn.v', '6_gate', '6_mlp.down', '7_attn.o', '7_gate', '7_mlp.up', '8_gate', '9_gate', '9_mlp.up']",
"rank: 2",
"tags: ['model: vicuna', 'repo_name: vicuna_channel_1_epistemic_reasoning_All', 'file_name: vicuna_channel_1_epistemic_reasoning_All_5000_5.pt', 'base_model: lmsys/vicuna-7b-v1.5', 'pruning_style: channel', 'community: 1', 'pruning_ratio: 20', 'dataset_label: epistemic_reasoning', 'sparsity_ratio: 20', \"dataset: ['tasksource/bigbench', 'epistemic_reasoning']\", 'finetune: All', 'modules_size: 30', \"modules: ['10_attn.v', '10_gate', '11_gate', '12_mlp.up', '13_attn.v', '14_attn.q', '15_attn.k', '15_mlp.down', '18_attn.q', '18_mlp.up', '20_attn.v', '22_attn.v', '23_mlp.down', '23_mlp.up', '24_attn.k', '25_attn.v', '26_attn.q', '28_mlp.up', '29_mlp.down', '3_attn.k', '4_gate', '6_attn.v', '6_gate', '6_mlp.down', '7_attn.o', '7_gate', '7_mlp.up', '8_gate', '9_gate', '9_mlp.up']\", 'rank: 2']",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T02:22:51Z | ---
library_name: transformers
tags:
- 'model: vicuna'
- 'repo_name: vicuna_channel_1_epistemic_reasoning_All'
- 'file_name: vicuna_channel_1_epistemic_reasoning_All_5000_5.pt'
- 'base_model: lmsys/vicuna-7b-v1.5'
- 'pruning_style: channel'
- 'community: 1'
- 'pruning_ratio: 20'
- 'dataset_label: epistemic_reasoning'
- 'sparsity_ratio: 20'
- 'dataset: [''tasksource/bigbench'', ''epistemic_reasoning'']'
- 'finetune: All'
- 'modules_size: 30'
- 'modules: [''10_attn.v'', ''10_gate'', ''11_gate'', ''12_mlp.up'', ''13_attn.v'',
''14_attn.q'', ''15_attn.k'', ''15_mlp.down'', ''18_attn.q'', ''18_mlp.up'', ''20_attn.v'',
''22_attn.v'', ''23_mlp.down'', ''23_mlp.up'', ''24_attn.k'', ''25_attn.v'', ''26_attn.q'',
''28_mlp.up'', ''29_mlp.down'', ''3_attn.k'', ''4_gate'', ''6_attn.v'', ''6_gate'',
''6_mlp.down'', ''7_attn.o'', ''7_gate'', ''7_mlp.up'', ''8_gate'', ''9_gate'',
''9_mlp.up'']'
- 'rank: 2'
- 'tags: [''model: vicuna'', ''repo_name: vicuna_channel_1_epistemic_reasoning_All'',
''file_name: vicuna_channel_1_epistemic_reasoning_All_5000_5.pt'', ''base_model:
lmsys/vicuna-7b-v1.5'', ''pruning_style: channel'', ''community: 1'', ''pruning_ratio:
20'', ''dataset_label: epistemic_reasoning'', ''sparsity_ratio: 20'', "dataset:
[''tasksource/bigbench'', ''epistemic_reasoning'']", ''finetune: All'', ''modules_size:
30'', "modules: [''10_attn.v'', ''10_gate'', ''11_gate'', ''12_mlp.up'', ''13_attn.v'',
''14_attn.q'', ''15_attn.k'', ''15_mlp.down'', ''18_attn.q'', ''18_mlp.up'', ''20_attn.v'',
''22_attn.v'', ''23_mlp.down'', ''23_mlp.up'', ''24_attn.k'', ''25_attn.v'', ''26_attn.q'',
''28_mlp.up'', ''29_mlp.down'', ''3_attn.k'', ''4_gate'', ''6_attn.v'', ''6_gate'',
''6_mlp.down'', ''7_attn.o'', ''7_gate'', ''7_mlp.up'', ''8_gate'', ''9_gate'',
''9_mlp.up'']", ''rank: 2'']'
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kunli-cs/VSwin_MA52_Weights | kunli-cs | 2025-06-03T02:28:21Z | 0 | 0 | null | [
"dataset:kunli-cs/MA-52",
"license:apache-2.0",
"region:us"
] | null | 2025-06-03T02:14:56Z | ---
license: apache-2.0
datasets:
- kunli-cs/MA-52
metrics:
- accuracy
--- |
Sukumar12345/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-beaked_tame_bat | Sukumar12345 | 2025-06-03T02:24:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am beaked tame bat",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-02T09:05:07Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-beaked_tame_bat
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am beaked tame bat
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-beaked_tame_bat
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Sukumar12345/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-beaked_tame_bat", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
openbmb/AgentCPM-GUI | openbmb | 2025-06-03T02:22:52Z | 1,133 | 115 | null | [
"safetensors",
"minicpmv",
"AgentCPM-GUI",
"gui agent",
"android agent",
"multimodal",
"image-text-to-text",
"conversational",
"custom_code",
"zh",
"en",
"arxiv:2506.01391",
"base_model:openbmb/MiniCPM-V-2_6",
"base_model:finetune:openbmb/MiniCPM-V-2_6",
"license:apache-2.0",
"region:us"
] | image-text-to-text | 2025-05-08T03:37:00Z | ---
license: apache-2.0
language:
- zh
- en
tags:
- AgentCPM-GUI
- gui agent
- android agent
- multimodal
base_model:
- openbmb/MiniCPM-V-2_6
pipeline_tag: image-text-to-text
---
# AgentCPM-GUI
[GitHub](https://github.com/OpenBMB/AgentCPM-GUI) | [Technical Report](https://arxiv.org/abs/2506.01391)
## News
* [2025-06-03] 📄📄📄 We have released the **technical report** of AgentCPM-GUI! Check it out [here](https://arxiv.org/abs/2506.01391).
* [2025-05-13] 🚀🚀🚀 We have open-sourced **AgentCPM-GUI**, an on-device GUI agent capable of operating Chinese & English apps and equipped with RFT-enhanced reasoning abilities.
## Overview
**AgentCPM-GUI** is an open-source on-device LLM agent model jointly developed by [THUNLP](https://nlp.csai.tsinghua.edu.cn), Renmin University of China and [ModelBest](https://modelbest.cn/en). Built on [MiniCPM-V](https://github.com/OpenBMB/MiniCPM-V) with 8 billion parameters, it accepts smartphone screenshots as input and autonomously executes user-specified tasks.
Key features include:
- **High-quality GUI grounding** — Pre-training on a large-scale bilingual Android dataset significantly boosts localization and comprehension of common GUI widgets (buttons, input boxes, labels, icons, etc.).
- **Chinese-app operation** — The first open-source GUI agent finely tuned for Chinese apps, covering 30+ popular titles such as Amap, Dianping, bilibili, and Xiaohongshu.
- **Enhanced planning & reasoning** — Reinforcement fine-tuning (RFT) lets the model “think” before outputting an action, greatly improving success on complex tasks.
- **Compact action-space design** — An optimized action space and concise JSON format reduce the average action length to 9.7 tokens, boosting on-device inference efficiency.
Demo Case (1x speed):
https://github.com/user-attachments/assets/5472a659-cd71-4bce-a181-0981129c6a81
## Quick Start
### Install dependencies
```bash
git clone https://github.com/OpenBMB/AgentCPM-GUI
cd AgentCPM-GUI
conda create -n gui_agent python=3.11
conda activate gui_agent
pip install -r requirements.txt
```
### Download the model
Download [AgentCPM-GUI](https://huggingface.co/openbmb/AgentCPM-GUI) from Hugging Face and place it in `model/AgentCPM-GUI`.
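For example (a hedged sketch; any download method works):

```bash
huggingface-cli download openbmb/AgentCPM-GUI --local-dir model/AgentCPM-GUI
```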
#### Huggingface Inference
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from PIL import Image
import json
# 1. Load the model and tokenizer
model_path = "model/AgentCPM-GUI" # model path
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, torch_dtype=torch.bfloat16)
model = model.to("cuda:0")
# 2. Build the input
instruction = "请点击屏幕上的‘会员’按钮"  # i.e. "Please tap the '会员' (Membership) button on the screen"
image_path = "assets/test.jpeg"
image = Image.open(image_path).convert("RGB")
# 3. Resize the longer side to 1120 px to save compute & memory
def __resize__(origin_img):
resolution = origin_img.size
w,h = resolution
max_line_res = 1120
if max_line_res is not None:
max_line = max_line_res
if h > max_line:
w = int(w * max_line / h)
h = max_line
if w > max_line:
h = int(h * max_line / w)
w = max_line
img = origin_img.resize((w,h),resample=Image.Resampling.LANCZOS)
return img
image = __resize__(image)
# 4. Build the message format
messages = [{
"role": "user",
"content": [
f"<Question>{instruction}</Question>\n当前屏幕截图:",
image
]
}]
# 5. Inference
ACTION_SCHEMA = json.load(open('eval/utils/schema/schema.json', encoding="utf-8"))
items = list(ACTION_SCHEMA.items())
insert_index = 3
items.insert(insert_index, ("required", ["thought"])) # enable/disable thought by setting it to "required"/"optional"
ACTION_SCHEMA = dict(items)
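# Note: the system prompt below is written in Chinese. Rough gist: "You are an
# agent familiar with Android touch-screen GUI operations. Given the user's
# question and the current screenshot, output the next action as compact JSON
# that conforms to the schema."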
SYSTEM_PROMPT = f'''# Role
你是一名熟悉安卓系统触屏GUI操作的智能体,将根据用户的问题,分析当前界面的GUI元素和布局,生成相应的操作。
# Task
针对用户问题,根据输入的当前屏幕截图,输出下一步的操作。
# Rule
- 以紧凑JSON格式输出
- 输出操作必须遵循Schema约束
# Schema
{json.dumps(ACTION_SCHEMA, indent=None, ensure_ascii=False, separators=(',', ':'))}'''
outputs = model.chat(
image=None,
msgs=messages,
system_prompt=SYSTEM_PROMPT,
tokenizer=tokenizer,
temperature=0.1,
top_p=0.3,
n=1,
)
# 6. Output
print(outputs)
```
Expected output (the Chinese `thought` explains that the goal is to tap the ‘会员’/Membership button, which opens the app's membership content):
```JSON
{"thought":"任务目标是点击屏幕上的‘会员’按钮。当前界面显示了应用的推荐页面,顶部有一个导航栏。点击‘会员’按钮可以访问应用的会员相关内容。","POINT":[729,69]}
```
#### vLLM Inference
```bash
# Launch the vLLM server
vllm serve model/AgentCPM-GUI --served-model-name AgentCPM-GUI --tensor_parallel_size 1 --trust-remote-code
```
```python
import base64
import io
import json
import requests
from PIL import Image
END_POINT = "http://localhost:8000/v1/chat/completions" # Replace with actual endpoint
# system prompt
ACTION_SCHEMA = json.load(open('eval/utils/schema/schema.json', encoding="utf-8"))
items = list(ACTION_SCHEMA.items())
insert_index = 3
items.insert(insert_index, ("required", ["thought"])) # enable/disable thought by setting it to "required"/"optional"
ACTION_SCHEMA = dict(items)
SYSTEM_PROMPT = f'''# Role
你是一名熟悉安卓系统触屏GUI操作的智能体,将根据用户的问题,分析当前界面的GUI元素和布局,生成相应的操作。
# Task
针对用户问题,根据输入的当前屏幕截图,输出下一步的操作。
# Rule
- 以紧凑JSON格式输出
- 输出操作必须遵循Schema约束
# Schema
{json.dumps(ACTION_SCHEMA, indent=None, ensure_ascii=False, separators=(',', ':'))}'''
def encode_image(image: Image.Image) -> str:
"""Convert PIL Image to base64-encoded string."""
with io.BytesIO() as in_mem_file:
image.save(in_mem_file, format="JPEG")
in_mem_file.seek(0)
return base64.b64encode(in_mem_file.read()).decode("utf-8")
def __resize__(origin_img):
resolution = origin_img.size
w,h = resolution
max_line_res = 1120
if max_line_res is not None:
max_line = max_line_res
if h > max_line:
w = int(w * max_line / h)
h = max_line
if w > max_line:
h = int(h * max_line / w)
w = max_line
img = origin_img.resize((w,h),resample=Image.Resampling.LANCZOS)
return img
def predict(text_prompt: str, image: Image.Image):
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{"role": "user", "content": [
{"type": "text", "text": f"<Question>{text_prompt}</Question>\n当前屏幕截图:(<image>./</image>)"},
{"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{encode_image(image)}"}}
]}
]
payload = {
"model": "AgentCPM-GUI", # Your model name
"temperature": 0.1,
"messages": messages,
"max_tokens": 2048,
}
headers = {
"Content-Type": "application/json",
}
response = requests.post(END_POINT, headers=headers, json=payload)
assistant_msg = response.json()["choices"][0]["message"]["content"]
return assistant_msg
image = __resize__(Image.open("assets/test.jpeg"))
instruction = "请点击屏幕上的‘会员’按钮"  # i.e. "Please tap the '会员' (Membership) button on the screen"
response = predict(instruction, image)
print(response)
```
### Action Space
At each step, the agent outputs a single JSON object that contains:
- One (and only one) primitive action, chosen from the list below;
- Optional modifiers (`duration`, `thought`) and/or a task-level flag (`STATUS`).
Note that all keywords are **case-sensitive**, and we use **compact JSON** (i.e., no extra whitespace), which affects the tokenizer’s behavior.
| Action | Required field(s) | Optional field(s) | Purpose | Example |
| --------------------- | ----------------------------------------------------------------------------------------------------------- | ----------------------------- | --------------------------------------------------------------------------- | ------------------------------------------------ |
| **Click** | `POINT:[x,y]` | `duration`,`thought`,`STATUS` | Single tap at the normalized screen coordinate (0–1000, origin = top-left). | `{"POINT":[480,320]}` |
| **Long Press** | `POINT:[x,y]`<br>`duration:1000` | `duration`,`thought`,`STATUS` | Touch-and-hold at coordinate (set a longer duration, e.g. >200 ms). | `{"POINT":[480,320],"duration":1000}` |
| **Swipe** | `POINT:[x,y]`<br>`to:"up" \| "down" \| "left" \| "right"` **or** `to:[x,y]` | `duration`,`thought`,`STATUS` | Swipe from the start point toward a direction **or** another coordinate. | `{"POINT":[500,200],"to":"down"}` |
| **Press key** | `PRESS:"HOME" \| "BACK" \| "ENTER"` | `duration`,`thought`,`STATUS` | Trigger a hardware / navigation button. | `{"PRESS":"HOME"}` |
| **Type text** | `TYPE:"<text>"` | `duration`,`thought`,`STATUS` | Insert the given text at the current input focus. | `{"TYPE":"Hello, world!"}` |
| **Wait** | `duration` | `thought`,`STATUS` | Idle for the specified time without any other action. | `{"duration":500}` |
| **Task-level status** | `STATUS:"start" \| "continue" \| "finish" \| "satisfied" \| "impossible" \| "interrupt" \| "need_feedback"` | `duration`,`thought` | Report task progress; may appear **alone** or **with a primitive action**. | `{"STATUS":"finish"}` |
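This schema maps cleanly onto a device controller. Below is a hedged, unofficial sketch of a dispatcher for these actions; the `StubDevice` handlers stand in for a real (e.g. adb-backed) controller, and coordinates are the normalized 0–1000 values described above:

```python
import json

class StubDevice:
    """Hypothetical stand-in for a real Android controller."""
    def tap(self, x, y, duration_ms=100): print(f"tap {x},{y} ({duration_ms} ms)")
    def swipe(self, start, to, duration_ms=200): print(f"swipe {start} -> {to}")
    def press_key(self, key): print(f"press {key}")
    def type_text(self, text): print(f"type {text!r}")
    def wait(self, ms): print(f"wait {ms} ms")

def dispatch(action_json: str, device) -> None:
    act = json.loads(action_json)  # the model emits compact JSON
    if "POINT" in act and "to" in act:
        device.swipe(act["POINT"], act["to"], act.get("duration", 200))
    elif "POINT" in act:
        device.tap(*act["POINT"], duration_ms=act.get("duration", 100))
    elif "PRESS" in act:
        device.press_key(act["PRESS"])        # "HOME" | "BACK" | "ENTER"
    elif "TYPE" in act:
        device.type_text(act["TYPE"])
    elif "duration" in act:
        device.wait(act["duration"])          # bare wait action
    if "STATUS" in act:
        print("status:", act["STATUS"])       # may appear alone or with an action

dispatch('{"thought":"tap the member button","POINT":[729,69]}', StubDevice())
```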
## Fine-tuning
Source code for SFT and RFT training is provided — see [GitHub](https://github.com/OpenBMB/AgentCPM-GUI).
## Performance Evaluation
### Grounding Benchmark
| Model | fun2point | text2point | bbox2text | average |
| ------------------------- | -------------- | -------------- | -------------- | -------------- |
| **AgentCPM-GUI-8B** | **79.1** | **76.5** | **58.2** | **71.3** |
| Qwen2.5-VL-7B | 36.8 | 52.0 | 44.1 | 44.3 |
| Intern2.5-VL-8B | 17.2 | 24.2 | 45.9 | 29.1 |
| Intern2.5-VL-26B | 14.8 | 16.6 | 36.3 | 22.6 |
| OS-Genesis-7B | 8.3 | 5.8 | 4.0 | 6.0 |
| UI-TARS-7B | 56.8 | 66.7 | 1.4 | 41.6 |
| OS-Altas-7B | 53.6 | 60.7 | 0.4 | 38.2 |
| Aguvis-7B | 60.8 | **76.5** | 0.2 | 45.8 |
| GPT-4o | 22.1 | 19.9 | 14.3 | 18.8 |
| GPT-4o with Grounding | 44.3 | 44.0 | 14.3 | 44.2 |
### Agent Benchmark
| Dataset | Android Control-Low TM | Android Control-Low EM | Android Control-High TM | Android Control-High EM | GUI-Odyssey TM | GUI-Odyssey EM | AITZ TM | AITZ EM | Chinese APP TM | Chinese APP EM |
| ------------------------- | ---------------------- | ---------------------- | ----------------------- | ----------------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- |
| **AgentCPM-GUI-8B** | **94.39** | **90.20** | **77.70** | **69.17** | **90.85** | **74.96** | **85.71** | **76.38** | **96.86** | **91.28** |
| Qwen2.5-VL-7B | 92.11 | 82.12 | 69.65 | 57.36 | 55.33 | 40.90 | 73.16 | 57.58 | 68.53 | 48.80 |
| UI-TARS-7B | 93.52 | 88.89 | 68.53 | 60.81 | 78.79 | 57.33 | 71.74 | 55.31 | 71.01 | 53.92 |
| OS-Genesis-7B | 90.74 | 74.22 | 65.92 | 44.43 | 11.67 | 3.63 | 19.98 | 8.45 | 38.10 | 14.50 |
| OS-Atlas-7B | 73.03 | 67.25 | 70.36 | 56.53 | 91.83* | 76.76* | 74.13 | 58.45 | 81.53 | 55.89 |
| Aguvis-7B | 93.85 | 89.40 | 65.56 | 54.18 | 26.71 | 13.54 | 35.71 | 18.99 | 67.43 | 38.20 |
| OdysseyAgent-7B | 65.10 | 39.16 | 58.80 | 32.74 | 90.83 | 73.67 | 59.17 | 31.60 | 67.56 | 25.44 |
| GPT-4o | - | 19.49 | - | 20.80 | - | 20.39 | 70.00 | 35.30 | 3.67 | 3.67 |
| Gemini 2.0 | - | 28.50 | - | 60.20 | - | 3.27 | - | - | - | - |
| Claude | - | 19.40 | - | 12.50 | 60.90 | - | - | - | - | - |
> \*Different train/test splits
TM and EM stand for **Type Match** and **Exact Match**, respectively. All evaluation data and code are open-sourced — see [here](https://github.com/OpenBMB/AgentCPM-GUI/tree/main/eval) for details.
## Evaluation Data
We provide **CAGUI**, an evaluation benchmark for Chinese apps covering **grounding** and **agent** tasks.
See the dataset on [Hugging Face](https://huggingface.co/datasets/openbmb/CAGUI).
## License
* Code in this repository is released under the [Apache-2.0](./LICENSE) license.
## Citation
If **AgentCPM-GUI** is useful for your research, please cite:
```bibtex
@article{zhang2025agentcpmgui,
title={Agent{CPM}-{GUI}: Building Mobile-Use Agents with Reinforcement Fine-Tuning},
author={Zhong Zhang and Yaxi Lu and Yikun Fu and Yupeng Huo and Shenzhi Yang and Yesai Wu and Han Si and Xin Cong and Haotian Chen and Yankai Lin and Jie Xie and Wei Zhou and Wang Xu and Yuanheng Zhang and Zhou Su and Zhongwu Zhai and Xiaoming Liu and Yudong Mei and Jianming Xu and Hongyan Tian and Chongyi Wang and Chi Chen and Yuan Yao and Zhiyuan Liu and Maosong Sun},
year={2025},
journal={arXiv preprint arXiv:2506.01391},
}
```
|
KBhandari11/vicuna_channel_1_analytic_entailment_Complete_Random | KBhandari11 | 2025-06-03T02:22:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"model: vicuna",
"repo_name: vicuna_channel_1_analytic_entailment_Complete Random",
"file_name: vicuna_channel_1_analytic_entailment_Complete Random_5000_5.pt",
"base_model: lmsys/vicuna-7b-v1.5",
"pruning_style: channel",
"community: 1",
"pruning_ratio: 20",
"dataset_label: analytic_entailment",
"sparsity_ratio: 20",
"dataset: ['tasksource/bigbench', 'analytic_entailment']",
"finetune: Complete Random",
"modules_size: 30",
"modules: ['11_attn.q', '3_gate', '17_mlp.up', '16_attn.v', '14_attn.v', '3_mlp.up', '28_attn.q', '24_attn.q', '23_attn.v', '4_attn.o', '28_mlp.down', '6_attn.k', '27_attn.k', '22_attn.q', '11_mlp.down', '21_mlp.down', '3_mlp.down', '12_gate', '12_attn.k', '11_attn.o', '16_mlp.down', '29_attn.q', '8_attn.k', '25_attn.o', '18_gate', '15_attn.v', '14_gate', '5_attn.k', '13_attn.q', '5_mlp.up']",
"rank: 1",
"tags: ['model: vicuna', 'repo_name: vicuna_channel_1_analytic_entailment_Complete Random', 'file_name: vicuna_channel_1_analytic_entailment_Complete Random_5000_5.pt', 'base_model: lmsys/vicuna-7b-v1.5', 'pruning_style: channel', 'community: 1', 'pruning_ratio: 20', 'dataset_label: analytic_entailment', 'sparsity_ratio: 20', \"dataset: ['tasksource/bigbench', 'analytic_entailment']\", 'finetune: Complete Random', 'modules_size: 30', \"modules: ['11_attn.q', '3_gate', '17_mlp.up', '16_attn.v', '14_attn.v', '3_mlp.up', '28_attn.q', '24_attn.q', '23_attn.v', '4_attn.o', '28_mlp.down', '6_attn.k', '27_attn.k', '22_attn.q', '11_mlp.down', '21_mlp.down', '3_mlp.down', '12_gate', '12_attn.k', '11_attn.o', '16_mlp.down', '29_attn.q', '8_attn.k', '25_attn.o', '18_gate', '15_attn.v', '14_gate', '5_attn.k', '13_attn.q', '5_mlp.up']\", 'rank: 1']",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T02:16:29Z | ---
library_name: transformers
tags:
- 'model: vicuna'
- 'repo_name: vicuna_channel_1_analytic_entailment_Complete Random'
- 'file_name: vicuna_channel_1_analytic_entailment_Complete Random_5000_5.pt'
- 'base_model: lmsys/vicuna-7b-v1.5'
- 'pruning_style: channel'
- 'community: 1'
- 'pruning_ratio: 20'
- 'dataset_label: analytic_entailment'
- 'sparsity_ratio: 20'
- 'dataset: [''tasksource/bigbench'', ''analytic_entailment'']'
- 'finetune: Complete Random'
- 'modules_size: 30'
- 'modules: [''11_attn.q'', ''3_gate'', ''17_mlp.up'', ''16_attn.v'', ''14_attn.v'',
''3_mlp.up'', ''28_attn.q'', ''24_attn.q'', ''23_attn.v'', ''4_attn.o'', ''28_mlp.down'',
''6_attn.k'', ''27_attn.k'', ''22_attn.q'', ''11_mlp.down'', ''21_mlp.down'', ''3_mlp.down'',
''12_gate'', ''12_attn.k'', ''11_attn.o'', ''16_mlp.down'', ''29_attn.q'', ''8_attn.k'',
''25_attn.o'', ''18_gate'', ''15_attn.v'', ''14_gate'', ''5_attn.k'', ''13_attn.q'',
''5_mlp.up'']'
- 'rank: 1'
- 'tags: [''model: vicuna'', ''repo_name: vicuna_channel_1_analytic_entailment_Complete
Random'', ''file_name: vicuna_channel_1_analytic_entailment_Complete Random_5000_5.pt'',
''base_model: lmsys/vicuna-7b-v1.5'', ''pruning_style: channel'', ''community: 1'',
''pruning_ratio: 20'', ''dataset_label: analytic_entailment'', ''sparsity_ratio:
20'', "dataset: [''tasksource/bigbench'', ''analytic_entailment'']", ''finetune:
Complete Random'', ''modules_size: 30'', "modules: [''11_attn.q'', ''3_gate'', ''17_mlp.up'',
''16_attn.v'', ''14_attn.v'', ''3_mlp.up'', ''28_attn.q'', ''24_attn.q'', ''23_attn.v'',
''4_attn.o'', ''28_mlp.down'', ''6_attn.k'', ''27_attn.k'', ''22_attn.q'', ''11_mlp.down'',
''21_mlp.down'', ''3_mlp.down'', ''12_gate'', ''12_attn.k'', ''11_attn.o'', ''16_mlp.down'',
''29_attn.q'', ''8_attn.k'', ''25_attn.o'', ''18_gate'', ''15_attn.v'', ''14_gate'',
''5_attn.k'', ''13_attn.q'', ''5_mlp.up'']", ''rank: 1'']'
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
easydata2022/exportModelTinyRandomLlama3 | easydata2022 | 2025-06-03T02:20:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T02:20:23Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
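The card leaves this section blank; the snippet below is an editor's sketch, not from the original card. It assumes the checkpoint loads with the standard `transformers` text-generation pipeline, using the repo id from this entry.

```python
# Hypothetical usage sketch (repo id taken from this entry; everything else
# is a standard transformers call, assumed rather than documented here).
from transformers import pipeline

generator = pipeline("text-generation", model="easydata2022/exportModelTinyRandomLlama3")
print(generator("Hello, world!", max_new_tokens=32)[0]["generated_text"])
```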
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vertings6/75f31e51-c140-4d26-9a53-feb56676e131 | vertings6 | 2025-06-03T02:20:34Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B",
"license:llama3.1",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-02T23:42:59Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 75f31e51-c140-4d26-9a53-feb56676e131
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 05ddb0f3b97f0027_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: vertings6/75f31e51-c140-4d26-9a53-feb56676e131
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-07
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.2
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 300
micro_batch_size: 10
mixed_precision: bf16
mlflow_experiment_name: /tmp/05ddb0f3b97f0027_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8bbfc1de-db6a-4e19-9317-3dd0372d844f
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 8bbfc1de-db6a-4e19-9317-3dd0372d844f
warmup_steps: 30
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# 75f31e51-c140-4d26-9a53-feb56676e131
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1177
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-07
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 20
- optimizer: `adamw_bnb_8bit` (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7736 | 0.0001 | 1 | 1.1182 |
| 1.177 | 0.0075 | 150 | 1.1179 |
| 1.7006 | 0.0150 | 300 | 1.1177 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-ERP-12B-Q8_0-GGUF | SuperbEmphasis | 2025-06-03T02:19:00Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-ERP-12B",
"base_model:quantized:SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-ERP-12B",
"endpoints_compatible",
"region:us"
] | null | 2025-06-03T02:18:07Z | ---
base_model: SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-ERP-12B
tags:
- llama-cpp
- gguf-my-repo
---
# SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-ERP-12B-Q8_0-GGUF
This model was converted to GGUF format from [`SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-ERP-12B`](https://huggingface.co/SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-ERP-12B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-ERP-12B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-ERP-12B-Q8_0-GGUF --hf-file omega-darker_the-final-directive-longform-erp-12b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-ERP-12B-Q8_0-GGUF --hf-file omega-darker_the-final-directive-longform-erp-12b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-ERP-12B-Q8_0-GGUF --hf-file omega-darker_the-final-directive-longform-erp-12b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo SuperbEmphasis/Omega-Darker_The-Final-Directive-Longform-ERP-12B-Q8_0-GGUF --hf-file omega-darker_the-final-directive-longform-erp-12b-q8_0.gguf -c 2048
```
|
lefantom00/Qwen3-8B-abliterated-iSMART | lefantom00 | 2025-06-03T02:16:59Z | 137 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"vi",
"base_model:huihui-ai/Qwen3-8B-abliterated",
"base_model:quantized:huihui-ai/Qwen3-8B-abliterated",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-23T07:17:47Z | ---
base_model: huihui-ai/Qwen3-8B-abliterated
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- vi
---
|
KBhandari11/vicuna_channel_1_analytic_entailment_Community | KBhandari11 | 2025-06-03T02:16:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"model: vicuna",
"repo_name: vicuna_channel_1_analytic_entailment_Community",
"file_name: vicuna_channel_1_analytic_entailment_Community_5000_5.pt",
"base_model: lmsys/vicuna-7b-v1.5",
"pruning_style: channel",
"community: 1",
"pruning_ratio: 20",
"dataset_label: analytic_entailment",
"sparsity_ratio: 20",
"dataset: ['tasksource/bigbench', 'analytic_entailment']",
"finetune: Community",
"modules_size: 30",
"modules: ['10_attn.v', '10_gate', '11_gate', '12_mlp.up', '13_attn.v', '14_attn.q', '15_attn.k', '15_mlp.down', '18_attn.q', '18_mlp.up', '20_attn.v', '22_attn.v', '23_mlp.down', '23_mlp.up', '24_attn.k', '25_attn.v', '26_attn.q', '28_mlp.up', '29_mlp.down', '3_attn.k', '4_gate', '6_attn.v', '6_gate', '6_mlp.down', '7_attn.o', '7_gate', '7_mlp.up', '8_gate', '9_gate', '9_mlp.up']",
"rank: 1",
"tags: ['model: vicuna', 'repo_name: vicuna_channel_1_analytic_entailment_Community', 'file_name: vicuna_channel_1_analytic_entailment_Community_5000_5.pt', 'base_model: lmsys/vicuna-7b-v1.5', 'pruning_style: channel', 'community: 1', 'pruning_ratio: 20', 'dataset_label: analytic_entailment', 'sparsity_ratio: 20', \"dataset: ['tasksource/bigbench', 'analytic_entailment']\", 'finetune: Community', 'modules_size: 30', \"modules: ['10_attn.v', '10_gate', '11_gate', '12_mlp.up', '13_attn.v', '14_attn.q', '15_attn.k', '15_mlp.down', '18_attn.q', '18_mlp.up', '20_attn.v', '22_attn.v', '23_mlp.down', '23_mlp.up', '24_attn.k', '25_attn.v', '26_attn.q', '28_mlp.up', '29_mlp.down', '3_attn.k', '4_gate', '6_attn.v', '6_gate', '6_mlp.down', '7_attn.o', '7_gate', '7_mlp.up', '8_gate', '9_gate', '9_mlp.up']\", 'rank: 1']",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T02:10:17Z | ---
library_name: transformers
tags:
- 'model: vicuna'
- 'repo_name: vicuna_channel_1_analytic_entailment_Community'
- 'file_name: vicuna_channel_1_analytic_entailment_Community_5000_5.pt'
- 'base_model: lmsys/vicuna-7b-v1.5'
- 'pruning_style: channel'
- 'community: 1'
- 'pruning_ratio: 20'
- 'dataset_label: analytic_entailment'
- 'sparsity_ratio: 20'
- 'dataset: [''tasksource/bigbench'', ''analytic_entailment'']'
- 'finetune: Community'
- 'modules_size: 30'
- 'modules: [''10_attn.v'', ''10_gate'', ''11_gate'', ''12_mlp.up'', ''13_attn.v'',
''14_attn.q'', ''15_attn.k'', ''15_mlp.down'', ''18_attn.q'', ''18_mlp.up'', ''20_attn.v'',
''22_attn.v'', ''23_mlp.down'', ''23_mlp.up'', ''24_attn.k'', ''25_attn.v'', ''26_attn.q'',
''28_mlp.up'', ''29_mlp.down'', ''3_attn.k'', ''4_gate'', ''6_attn.v'', ''6_gate'',
''6_mlp.down'', ''7_attn.o'', ''7_gate'', ''7_mlp.up'', ''8_gate'', ''9_gate'',
''9_mlp.up'']'
- 'rank: 1'
- 'tags: [''model: vicuna'', ''repo_name: vicuna_channel_1_analytic_entailment_Community'',
''file_name: vicuna_channel_1_analytic_entailment_Community_5000_5.pt'', ''base_model:
lmsys/vicuna-7b-v1.5'', ''pruning_style: channel'', ''community: 1'', ''pruning_ratio:
20'', ''dataset_label: analytic_entailment'', ''sparsity_ratio: 20'', "dataset:
[''tasksource/bigbench'', ''analytic_entailment'']", ''finetune: Community'', ''modules_size:
30'', "modules: [''10_attn.v'', ''10_gate'', ''11_gate'', ''12_mlp.up'', ''13_attn.v'',
''14_attn.q'', ''15_attn.k'', ''15_mlp.down'', ''18_attn.q'', ''18_mlp.up'', ''20_attn.v'',
''22_attn.v'', ''23_mlp.down'', ''23_mlp.up'', ''24_attn.k'', ''25_attn.v'', ''26_attn.q'',
''28_mlp.up'', ''29_mlp.down'', ''3_attn.k'', ''4_gate'', ''6_attn.v'', ''6_gate'',
''6_mlp.down'', ''7_attn.o'', ''7_gate'', ''7_mlp.up'', ''8_gate'', ''9_gate'',
''9_mlp.up'']", ''rank: 1'']'
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
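The card leaves this section blank; the snippet below is an editor's assumption, not from the original card. Per the tags, this repo holds a pruned variant of `lmsys/vicuna-7b-v1.5` with safetensors weights, so a plain `transformers` load should work.

```python
# Hypothetical sketch: load this entry's checkpoint like any causal LM.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "KBhandari11/vicuna_channel_1_analytic_entailment_Community"  # this entry's repo id
model = AutoModelForCausalLM.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

inputs = tokenizer("Q: Is every square a rectangle?\nA:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```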
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Factral/qwen2.5vl-3b-colombia-finetuned | Factral | 2025-06-03T02:13:17Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-VL-3B-Instruct",
"region:us"
] | null | 2025-06-03T02:13:08Z | ---
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
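The card leaves this section blank; the snippet below is an editor's sketch, not from the original card. It assumes this PEFT adapter attaches to the stated base model, `Qwen/Qwen2.5-VL-3B-Instruct`, with a recent `transformers` release that ships the Qwen2.5-VL classes.

```python
# Hypothetical sketch: load the stated base model, then attach this adapter.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from peft import PeftModel

base_id = "Qwen/Qwen2.5-VL-3B-Instruct"  # base model stated in the card frontmatter
base = Qwen2_5_VLForConditionalGeneration.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "Factral/qwen2.5vl-3b-colombia-finetuned")
processor = AutoProcessor.from_pretrained(base_id)
```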
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
jinmyung/alpaca_bloke_Llama-2-7b-Chat-fp16_v1 | jinmyung | 2025-06-03T02:08:44Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TheBloke/Llama-2-7B-Chat-fp16",
"base_model:adapter:TheBloke/Llama-2-7B-Chat-fp16",
"region:us"
] | null | 2025-06-03T02:06:07Z | ---
base_model: TheBloke/Llama-2-7B-Chat-fp16
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
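The card leaves this section blank; the snippet below is an editor's sketch, not from the original card. It assumes the adapter can be loaded in one call with peft's `AutoPeftModelForCausalLM`, with the tokenizer taken from the stated base model.

```python
# Hypothetical sketch: AutoPeftModelForCausalLM resolves the base model
# recorded in the adapter config and attaches the adapter automatically.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("jinmyung/alpaca_bloke_Llama-2-7b-Chat-fp16_v1")
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Llama-2-7B-Chat-fp16")  # stated base
```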
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
bruhzair/prototype0.4x66 | bruhzair | 2025-06-03T02:07:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T01:47:06Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x66
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using /workspace/cache/models--TheDrummer--Anubis-70B-v1/snapshots/e50d699bf6c21afcf4dbd9a8b4f73511b0366efb as a base.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-Nexus/snapshots/1fc6f9b78d8921a26003edb06a292e94488a4c52
* /workspace/cache/models--Sao10K--L3.1-70B-Hanami-x1/snapshots/f054d970fe9119d0237ce97029e6f5b9fce630eb
* /workspace/cache/models--ReadyArt--Forgotten-Safeword-70B-3.6/snapshots/caf3a6e92189ac5e2479d93eee50e4e57d87dadc
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/cache/models--ReadyArt--Forgotten-Safeword-70B-3.6/snapshots/caf3a6e92189ac5e2479d93eee50e4e57d87dadc
parameters:
select_topk: 0.1
- model: /workspace/cache/models--Sao10K--L3.1-70B-Hanami-x1/snapshots/f054d970fe9119d0237ce97029e6f5b9fce630eb
parameters:
select_topk: 0.3
- model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-Nexus/snapshots/1fc6f9b78d8921a26003edb06a292e94488a4c52
parameters:
select_topk: 0.5
- model: /workspace/cache/models--TheDrummer--Anubis-70B-v1/snapshots/e50d699bf6c21afcf4dbd9a8b4f73511b0366efb
parameters:
select_topk: 0.7
base_model: /workspace/cache/models--TheDrummer--Anubis-70B-v1/snapshots/e50d699bf6c21afcf4dbd9a8b4f73511b0366efb
merge_method: sce
tokenizer:
source: union
chat_template: llama3
int8_mask: true
dtype: bfloat16
```
|
winglian/qwen3-4b-math-kd-jsd-temp1-v3 | winglian | 2025-06-03T02:06:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"dataset:winglian/OpenThoughts-114k-math-correct-qwen3-14b-math-prepared-temp1",
"base_model:Qwen/Qwen3-4B-Base",
"base_model:finetune:Qwen/Qwen3-4B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T02:05:45Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-4B-Base
tags:
- generated_from_trainer
datasets:
- winglian/OpenThoughts-114k-math-correct-qwen3-14b-math-prepared-temp1
model-index:
- name: outputs/out-kd-4b-offline-t1-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
base_model: Qwen/Qwen3-4B-Base
# base_model: winglian/qwen3-4b-math
plugins:
- axolotl.integrations.kd.KDPlugin
- axolotl.integrations.liger.LigerPlugin
liger_rms_norm: true
liger_glu_activation: true
# torch_compile: true
strict: false
kd_trainer: true
kd_ce_alpha: 0.1
kd_alpha: 1.0
kd_temperature: 1.0
kd_beta: 0.5
kd_normalize_topk: false
dataloader_prefetch_factor: 1
dataloader_num_workers: 2
dataloader_pin_memory: true
gc_steps: -1 # gc at the end of each epoch
chat_template: qwen3
datasets:
- path: winglian/OpenThoughts-114k-math-correct-qwen3-14b-math-prepared-temp1
type: chat_template
split: train
split_thinking: true
eot_tokens:
- "<|im_end|>"
skip_prepare_dataset: true
dataset_prepared_path: last_run_prepared
val_set_size: 0.0
output_dir: ./outputs/out-kd-4b-offline-t1-v2
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
wandb_project: kd-4b-math
wandb_entity: axolotl-ai
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 4
num_epochs: 2
optimizer: adamw_torch_fused
adam_beta2: 0.999
lr_scheduler: rex
learning_rate: 3e-5
max_grad_norm: 0.2
save_safetensors: true
bf16: true
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
logging_steps: 1
flash_attention: true
warmup_steps: 100
evals_per_epoch: 4
saves_per_epoch: 2
debug:
weight_decay: 0.0
special_tokens:
eos_token: <|im_end|>
deepspeed: deepspeed_configs/zero2_torch_compile.json
```
</details><br>
# outputs/out-kd-4b-offline-t1-v2
This model is a fine-tuned version of [Qwen/Qwen3-4B-Base](https://huggingface.co/Qwen/Qwen3-4B-Base) on the winglian/OpenThoughts-114k-math-correct-qwen3-14b-math-prepared-temp1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: `adamw_torch_fused` (fused AdamW) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.1
- Tokenizers 0.21.1
|
xwzagan/Qwen3-14b-windmaster-4bit | xwzagan | 2025-06-03T02:03:55Z | 0 | 1 | null | [
"safetensors",
"qwen3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-02T15:34:18Z | # 数据集 风水大师
https://huggingface.co/datasets/Conard/fortune-telling
#微调后的风水大师模型,给Qwen3-14b增加模型能力
---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** xwzagan
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
KBhandari11/vicuna_channel_0_evaluating_information_essentiality_Complete_Random | KBhandari11 | 2025-06-03T02:03:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"model: vicuna",
"repo_name: vicuna_channel_0_evaluating_information_essentiality_Complete Random",
"file_name: vicuna_channel_0_evaluating_information_essentiality_Complete Random_5000_5.pt",
"base_model: lmsys/vicuna-7b-v1.5",
"pruning_style: channel",
"community: 0",
"pruning_ratio: 20",
"dataset_label: evaluating_information_essentiality",
"sparsity_ratio: 20",
"dataset: ['tasksource/bigbench', 'evaluating_information_essentiality']",
"finetune: Complete Random",
"modules_size: 45",
"modules: ['30_mlp.up', '3_attn.k', '12_gate', '27_attn.v', '30_gate', '7_attn.k', '9_attn.o', '15_attn.k', '7_attn.v', '5_attn.q', '11_gate', '14_attn.k', '6_attn.v', '11_attn.q', '8_attn.v', '15_attn.o', '13_attn.o', '18_gate', '24_mlp.up', '30_attn.v', '9_mlp.down', '8_mlp.up', '11_mlp.up', '18_attn.q', '16_mlp.up', '21_mlp.down', '19_mlp.down', '3_attn.v', '22_attn.q', '23_mlp.up', '19_attn.k', '10_attn.v', '27_attn.o', '29_mlp.down', '25_mlp.up', '23_attn.q', '15_mlp.down', '12_attn.v', '26_attn.q', '6_attn.o', '24_mlp.down', '21_gate', '13_gate', '10_mlp.up', '28_attn.v']",
"rank: 1",
"tags: ['model: vicuna', 'repo_name: vicuna_channel_0_evaluating_information_essentiality_Complete Random', 'file_name: vicuna_channel_0_evaluating_information_essentiality_Complete Random_5000_5.pt', 'base_model: lmsys/vicuna-7b-v1.5', 'pruning_style: channel', 'community: 0', 'pruning_ratio: 20', 'dataset_label: evaluating_information_essentiality', 'sparsity_ratio: 20', \"dataset: ['tasksource/bigbench', 'evaluating_information_essentiality']\", 'finetune: Complete Random', 'modules_size: 45', \"modules: ['30_mlp.up', '3_attn.k', '12_gate', '27_attn.v', '30_gate', '7_attn.k', '9_attn.o', '15_attn.k', '7_attn.v', '5_attn.q', '11_gate', '14_attn.k', '6_attn.v', '11_attn.q', '8_attn.v', '15_attn.o', '13_attn.o', '18_gate', '24_mlp.up', '30_attn.v', '9_mlp.down', '8_mlp.up', '11_mlp.up', '18_attn.q', '16_mlp.up', '21_mlp.down', '19_mlp.down', '3_attn.v', '22_attn.q', '23_mlp.up', '19_attn.k', '10_attn.v', '27_attn.o', '29_mlp.down', '25_mlp.up', '23_attn.q', '15_mlp.down', '12_attn.v', '26_attn.q', '6_attn.o', '24_mlp.down', '21_gate', '13_gate', '10_mlp.up', '28_attn.v']\", 'rank: 1']",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T01:58:03Z | ---
library_name: transformers
tags:
- 'model: vicuna'
- 'repo_name: vicuna_channel_0_evaluating_information_essentiality_Complete Random'
- 'file_name: vicuna_channel_0_evaluating_information_essentiality_Complete Random_5000_5.pt'
- 'base_model: lmsys/vicuna-7b-v1.5'
- 'pruning_style: channel'
- 'community: 0'
- 'pruning_ratio: 20'
- 'dataset_label: evaluating_information_essentiality'
- 'sparsity_ratio: 20'
- 'dataset: [''tasksource/bigbench'', ''evaluating_information_essentiality'']'
- 'finetune: Complete Random'
- 'modules_size: 45'
- 'modules: [''30_mlp.up'', ''3_attn.k'', ''12_gate'', ''27_attn.v'', ''30_gate'',
''7_attn.k'', ''9_attn.o'', ''15_attn.k'', ''7_attn.v'', ''5_attn.q'', ''11_gate'',
''14_attn.k'', ''6_attn.v'', ''11_attn.q'', ''8_attn.v'', ''15_attn.o'', ''13_attn.o'',
''18_gate'', ''24_mlp.up'', ''30_attn.v'', ''9_mlp.down'', ''8_mlp.up'', ''11_mlp.up'',
''18_attn.q'', ''16_mlp.up'', ''21_mlp.down'', ''19_mlp.down'', ''3_attn.v'', ''22_attn.q'',
''23_mlp.up'', ''19_attn.k'', ''10_attn.v'', ''27_attn.o'', ''29_mlp.down'', ''25_mlp.up'',
''23_attn.q'', ''15_mlp.down'', ''12_attn.v'', ''26_attn.q'', ''6_attn.o'', ''24_mlp.down'',
''21_gate'', ''13_gate'', ''10_mlp.up'', ''28_attn.v'']'
- 'rank: 1'
- 'tags: [''model: vicuna'', ''repo_name: vicuna_channel_0_evaluating_information_essentiality_Complete
Random'', ''file_name: vicuna_channel_0_evaluating_information_essentiality_Complete
Random_5000_5.pt'', ''base_model: lmsys/vicuna-7b-v1.5'', ''pruning_style: channel'',
''community: 0'', ''pruning_ratio: 20'', ''dataset_label: evaluating_information_essentiality'',
''sparsity_ratio: 20'', "dataset: [''tasksource/bigbench'', ''evaluating_information_essentiality'']",
''finetune: Complete Random'', ''modules_size: 45'', "modules: [''30_mlp.up'', ''3_attn.k'',
''12_gate'', ''27_attn.v'', ''30_gate'', ''7_attn.k'', ''9_attn.o'', ''15_attn.k'',
''7_attn.v'', ''5_attn.q'', ''11_gate'', ''14_attn.k'', ''6_attn.v'', ''11_attn.q'',
''8_attn.v'', ''15_attn.o'', ''13_attn.o'', ''18_gate'', ''24_mlp.up'', ''30_attn.v'',
''9_mlp.down'', ''8_mlp.up'', ''11_mlp.up'', ''18_attn.q'', ''16_mlp.up'', ''21_mlp.down'',
''19_mlp.down'', ''3_attn.v'', ''22_attn.q'', ''23_mlp.up'', ''19_attn.k'', ''10_attn.v'',
''27_attn.o'', ''29_mlp.down'', ''25_mlp.up'', ''23_attn.q'', ''15_mlp.down'', ''12_attn.v'',
''26_attn.q'', ''6_attn.o'', ''24_mlp.down'', ''21_gate'', ''13_gate'', ''10_mlp.up'',
''28_attn.v'']", ''rank: 1'']'
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
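The card leaves this section blank; the snippet below is an editor's assumption, not from the original card. Per the tags, this repo holds a pruned variant of `lmsys/vicuna-7b-v1.5` with safetensors weights, so the standard `transformers` pipeline should load it.

```python
# Hypothetical sketch using the text-generation pipeline with this entry's repo id.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="KBhandari11/vicuna_channel_0_evaluating_information_essentiality_Complete_Random",
)
print(pipe("Which statement is essential to answer the question?", max_new_tokens=48)[0]["generated_text"])
```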
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
judesv/phi2-svHUCustxt | judesv | 2025-06-03T02:01:37Z | 0 | 0 | null | [
"safetensors",
"text-generation",
"phi-2",
"LoRA",
"humanizer",
"license:mit",
"region:us"
] | text-generation | 2025-06-03T01:43:17Z | ---
license: mit
tags:
- text-generation
- phi-2
- LoRA
- humanizer
---
# 🧠 Phi-2 Humanizer (LoRA Fine-Tuned)
This model is a fine-tuned version of `microsoft/phi-2` on a dataset of 50+ examples. It specializes in making stiff or robotic AI-generated writing sound more natural, emotional, and conversational — while avoiding em/en dashes and overly formal phrasing.
---
## ✨ Demo
```python
import gradio as gr
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained("judesv/phi2-svHUCustxt")
tokenizer = AutoTokenizer.from_pretrained("judesv/phi2-svHUCustxt")
def humanize(prompt):
    # Wrap the raw text in the instruction format used during fine-tuning
    full_prompt = (
        f"Instruct: Humanize the following AI-generated text.\n"
        f"Input: {prompt}\n"
        f"Output:"
    )
    inputs = tokenizer(full_prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=100,
        do_sample=True,  # sampling gives more natural, varied phrasing
        top_p=0.9,
        temperature=0.7,
        eos_token_id=tokenizer.eos_token_id
    )
    result = tokenizer.decode(outputs[0], skip_special_tokens=True)
    # Strip the echoed prompt so only the newly generated text is returned
    return result[len(full_prompt):].strip()
gr.Interface(fn=humanize, inputs="text", outputs="text", title="🧠 Phi-2 Humanizer").launch()
```
|
ChengzhiMu/distilhubert-finetuned-gtzan | ChengzhiMu | 2025-06-03T02:00:48Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2025-05-30T02:12:57Z | ---
library_name: transformers
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.85
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5847
- Accuracy: 0.85
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: `adamw_torch` (AdamW) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.991 | 1.0 | 113 | 1.8920 | 0.54 |
| 1.205 | 2.0 | 226 | 1.2402 | 0.63 |
| 0.989 | 3.0 | 339 | 1.0598 | 0.68 |
| 0.6359 | 4.0 | 452 | 0.7967 | 0.74 |
| 0.5349 | 5.0 | 565 | 0.6752 | 0.8 |
| 0.3069 | 6.0 | 678 | 0.6000 | 0.8 |
| 0.3031 | 7.0 | 791 | 0.5846 | 0.83 |
| 0.1411 | 8.0 | 904 | 0.5506 | 0.82 |
| 0.1362 | 9.0 | 1017 | 0.5692 | 0.85 |
| 0.0767 | 10.0 | 1130 | 0.5847 | 0.85 |
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
lfhe/FLock-Arena-Task-10-Healers | lfhe | 2025-06-03T02:00:16Z | 914 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"region:us"
] | null | 2025-01-21T07:19:07Z | ---
base_model: microsoft/Phi-3.5-mini-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
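The card leaves this section blank; the snippet below is an editor's sketch, not from the original card. It assumes this PEFT adapter attaches to the stated base model, `microsoft/Phi-3.5-mini-instruct`, which is natively supported by recent `transformers` releases.

```python
# Hypothetical sketch: load the stated base model, then attach this adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-3.5-mini-instruct"  # base model stated in the card frontmatter
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "lfhe/FLock-Arena-Task-10-Healers")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```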
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
winglian/qwen3-4b-math-kd-jsd-temp1-v2 | winglian | 2025-06-03T01:59:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"dataset:winglian/OpenThoughts-114k-math-correct-qwen3-14b-math-prepared-temp1",
"base_model:Qwen/Qwen3-4B-Base",
"base_model:finetune:Qwen/Qwen3-4B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-02T19:42:07Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-4B-Base
tags:
- generated_from_trainer
datasets:
- winglian/OpenThoughts-114k-math-correct-qwen3-14b-math-prepared-temp1
model-index:
- name: outputs/out-kd-4b-offline-t1-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
base_model: Qwen/Qwen3-4B-Base
# base_model: winglian/qwen3-14b-math
plugins:
- axolotl.integrations.kd.KDPlugin
- axolotl.integrations.liger.LigerPlugin
liger_rms_norm: true
liger_glu_activation: true
# torch_compile: true
strict: false
kd_trainer: true
kd_ce_alpha: 0.4
kd_alpha: 1.0
kd_temperature: 1.0
kd_beta: 0.5
kd_normalize_topk: false
dataloader_prefetch_factor: 1
dataloader_num_workers: 2
dataloader_pin_memory: true
gc_steps: -1 # gc at the end of each epoch
chat_template: qwen3
datasets:
- path: winglian/OpenThoughts-114k-math-correct-qwen3-14b-math-prepared-temp1
type: chat_template
split: train
split_thinking: true
eot_tokens:
- "<|im_end|>"
skip_prepare_dataset: true
dataset_prepared_path: last_run_prepared
val_set_size: 0.0
output_dir: ./outputs/out-kd-4b-offline-t1-v2
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
wandb_project: kd-4b-math
wandb_entity: axolotl-ai
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 4
num_epochs: 2
optimizer: adamw_torch_fused
adam_beta2: 0.95
lr_scheduler: rex
learning_rate: 3e-5
max_grad_norm: 0.2
save_safetensors: true
bf16: true
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
logging_steps: 1
flash_attention: true
warmup_steps: 100
evals_per_epoch: 4
saves_per_epoch: 2
debug:
weight_decay: 0.0
special_tokens:
eos_token: <|im_end|>
deepspeed: deepspeed_configs/zero2_torch_compile.json
```
</details><br>
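For reference, a config like the one above is normally launched with the standard Axolotl CLI; a minimal sketch (the config filename is illustrative):

```bash
# Launch distributed KD training with the YAML config shown above
accelerate launch -m axolotl.cli.train kd-4b-math.yaml
```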
# outputs/out-kd-4b-offline-t1-v2
This model is a fine-tuned version of [Qwen/Qwen3-4B-Base](https://huggingface.co/Qwen/Qwen3-4B-Base) on the winglian/OpenThoughts-114k-math-correct-qwen3-14b-math-prepared-temp1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.95) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 3.5.1
- Tokenizers 0.21.1
|
KBhandari11/vicuna_channel_0_evaluating_information_essentiality_Community | KBhandari11 | 2025-06-03T01:57:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"model: vicuna",
"repo_name: vicuna_channel_0_evaluating_information_essentiality_Community",
"file_name: vicuna_channel_0_evaluating_information_essentiality_Community_5000_5.pt",
"base_model: lmsys/vicuna-7b-v1.5",
"pruning_style: channel",
"community: 0",
"pruning_ratio: 20",
"dataset_label: evaluating_information_essentiality",
"sparsity_ratio: 20",
"dataset: ['tasksource/bigbench', 'evaluating_information_essentiality']",
"finetune: Community",
"modules_size: 45",
"modules: ['12_attn.k', '13_attn.q', '14_mlp.down', '14_mlp.up', '15_attn.v', '16_attn.v', '16_gate', '16_mlp.down', '17_attn.o', '17_attn.q', '17_attn.v', '18_mlp.down', '19_attn.v', '19_gate', '19_mlp.up', '20_attn.q', '20_gate', '20_mlp.up', '21_attn.o', '21_attn.v', '22_attn.k', '22_attn.o', '22_mlp.up', '23_attn.o', '24_attn.o', '25_attn.o', '25_attn.q', '25_mlp.down', '26_attn.v', '26_mlp.down', '26_mlp.up', '27_attn.k', '27_attn.q', '27_mlp.up', '30_attn.q', '3_attn.o', '3_attn.q', '3_mlp.down', '3_mlp.up', '5_gate', '5_mlp.down', '6_attn.q', '7_attn.q', '8_attn.k', '8_attn.q']",
"rank: 1",
"tags: ['model: vicuna', 'repo_name: vicuna_channel_0_evaluating_information_essentiality_Community', 'file_name: vicuna_channel_0_evaluating_information_essentiality_Community_5000_5.pt', 'base_model: lmsys/vicuna-7b-v1.5', 'pruning_style: channel', 'community: 0', 'pruning_ratio: 20', 'dataset_label: evaluating_information_essentiality', 'sparsity_ratio: 20', \"dataset: ['tasksource/bigbench', 'evaluating_information_essentiality']\", 'finetune: Community', 'modules_size: 45', \"modules: ['12_attn.k', '13_attn.q', '14_mlp.down', '14_mlp.up', '15_attn.v', '16_attn.v', '16_gate', '16_mlp.down', '17_attn.o', '17_attn.q', '17_attn.v', '18_mlp.down', '19_attn.v', '19_gate', '19_mlp.up', '20_attn.q', '20_gate', '20_mlp.up', '21_attn.o', '21_attn.v', '22_attn.k', '22_attn.o', '22_mlp.up', '23_attn.o', '24_attn.o', '25_attn.o', '25_attn.q', '25_mlp.down', '26_attn.v', '26_mlp.down', '26_mlp.up', '27_attn.k', '27_attn.q', '27_mlp.up', '30_attn.q', '3_attn.o', '3_attn.q', '3_mlp.down', '3_mlp.up', '5_gate', '5_mlp.down', '6_attn.q', '7_attn.q', '8_attn.k', '8_attn.q']\", 'rank: 1']",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T01:52:07Z | ---
library_name: transformers
tags:
- 'model: vicuna'
- 'repo_name: vicuna_channel_0_evaluating_information_essentiality_Community'
- 'file_name: vicuna_channel_0_evaluating_information_essentiality_Community_5000_5.pt'
- 'base_model: lmsys/vicuna-7b-v1.5'
- 'pruning_style: channel'
- 'community: 0'
- 'pruning_ratio: 20'
- 'dataset_label: evaluating_information_essentiality'
- 'sparsity_ratio: 20'
- 'dataset: [''tasksource/bigbench'', ''evaluating_information_essentiality'']'
- 'finetune: Community'
- 'modules_size: 45'
- 'modules: [''12_attn.k'', ''13_attn.q'', ''14_mlp.down'', ''14_mlp.up'', ''15_attn.v'',
''16_attn.v'', ''16_gate'', ''16_mlp.down'', ''17_attn.o'', ''17_attn.q'', ''17_attn.v'',
''18_mlp.down'', ''19_attn.v'', ''19_gate'', ''19_mlp.up'', ''20_attn.q'', ''20_gate'',
''20_mlp.up'', ''21_attn.o'', ''21_attn.v'', ''22_attn.k'', ''22_attn.o'', ''22_mlp.up'',
''23_attn.o'', ''24_attn.o'', ''25_attn.o'', ''25_attn.q'', ''25_mlp.down'', ''26_attn.v'',
''26_mlp.down'', ''26_mlp.up'', ''27_attn.k'', ''27_attn.q'', ''27_mlp.up'', ''30_attn.q'',
''3_attn.o'', ''3_attn.q'', ''3_mlp.down'', ''3_mlp.up'', ''5_gate'', ''5_mlp.down'',
''6_attn.q'', ''7_attn.q'', ''8_attn.k'', ''8_attn.q'']'
- 'rank: 1'
- 'tags: [''model: vicuna'', ''repo_name: vicuna_channel_0_evaluating_information_essentiality_Community'',
''file_name: vicuna_channel_0_evaluating_information_essentiality_Community_5000_5.pt'',
''base_model: lmsys/vicuna-7b-v1.5'', ''pruning_style: channel'', ''community: 0'',
''pruning_ratio: 20'', ''dataset_label: evaluating_information_essentiality'', ''sparsity_ratio:
20'', "dataset: [''tasksource/bigbench'', ''evaluating_information_essentiality'']",
''finetune: Community'', ''modules_size: 45'', "modules: [''12_attn.k'', ''13_attn.q'',
''14_mlp.down'', ''14_mlp.up'', ''15_attn.v'', ''16_attn.v'', ''16_gate'', ''16_mlp.down'',
''17_attn.o'', ''17_attn.q'', ''17_attn.v'', ''18_mlp.down'', ''19_attn.v'', ''19_gate'',
''19_mlp.up'', ''20_attn.q'', ''20_gate'', ''20_mlp.up'', ''21_attn.o'', ''21_attn.v'',
''22_attn.k'', ''22_attn.o'', ''22_mlp.up'', ''23_attn.o'', ''24_attn.o'', ''25_attn.o'',
''25_attn.q'', ''25_mlp.down'', ''26_attn.v'', ''26_mlp.down'', ''26_mlp.up'', ''27_attn.k'',
''27_attn.q'', ''27_mlp.up'', ''30_attn.q'', ''3_attn.o'', ''3_attn.q'', ''3_mlp.down'',
''3_mlp.up'', ''5_gate'', ''5_mlp.down'', ''6_attn.q'', ''7_attn.q'', ''8_attn.k'',
''8_attn.q'']", ''rank: 1'']'
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Bifrost-AI/Qwen3-Bifrost-SOL-4B-GGUF | Bifrost-AI | 2025-06-03T01:46:49Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"code",
"finance",
"chat",
"text-generation",
"large-language-model",
"en",
"dataset:Bifrost-AI/Solana-Vanguard-Challenge",
"base_model:Bifrost-AI/Qwen3-Bifrost-SOL-4B",
"base_model:quantized:Bifrost-AI/Qwen3-Bifrost-SOL-4B",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T01:19:03Z | ---
license: mit
datasets:
- Bifrost-AI/Solana-Vanguard-Challenge
language:
- en
metrics:
- accuracy
- code_eval
base_model:
- Bifrost-AI/Qwen3-Bifrost-SOL-4B
pipeline_tag: text-generation
tags:
- code
- finance
- chat
- text-generation
- large-language-model
library_name: transformers
---
# Qwen3 Bifrost SOL 4B
### This fine-tuned variant of the Qwen3 4B model underwent supervised fine-tuning on the blockchain-specific dataset (Bifrost-AI/Solana-Vanguard-Challenge) and is optimized for downstream tasks in blockchain coding and smart contract development on the Solana ecosystem.
The **Solana Vanguard Challenge** dataset, comprising 1,000 diverse and in-depth questions, offers full-spectrum coverage of the Solana ecosystem. It spans fundamental blockchain concepts, advanced on-chain programming in Rust and the Anchor framework, client-side integration in TypeScript, detailed security strategies, and performance as well as regulatory considerations.
Qwen3 Bifrost SOL 4B is in active development; additional fine-tuning sessions and benchmark statistics are coming soon!
## Provided Quants
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/Bifrost-AI/Qwen3-Bifrost-SOL-4B-gguf/resolve/main/Qwen3-Bifrost-SOL-4B.IQ1_S.gguf) | IQ1_S | 1.1 | very low quality |
| [GGUF](https://huggingface.co/Bifrost-AI/Qwen3-Bifrost-SOL-4B-gguf/resolve/main/Qwen3-Bifrost-SOL-4B.IQ1_M.gguf) | IQ1_M | 1.2 | very low quality |
| [GGUF](https://huggingface.co/Bifrost-AI/Qwen3-Bifrost-SOL-4B-gguf/resolve/main/Qwen3-Bifrost-SOL-4B.TQ1_0.gguf) | TQ1_0 | 1.2 | very low quality |
| [GGUF](https://huggingface.co/Bifrost-AI/Qwen3-Bifrost-SOL-4B-gguf/resolve/main/Qwen3-Bifrost-SOL-4B.IQ2_S.gguf) | IQ2_S | 1.4 | fast, lower quality |
| [GGUF](https://huggingface.co/Bifrost-AI/Qwen3-Bifrost-SOL-4B-gguf/resolve/main/Qwen3-Bifrost-SOL-4B.Q2_K.gguf) | Q2_K | 1.6 | fast, lower quality |
| [GGUF](https://huggingface.co/Bifrost-AI/Qwen3-Bifrost-SOL-4B-gguf/resolve/main/Qwen3-Bifrost-SOL-4B.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/Bifrost-AI/Qwen3-Bifrost-SOL-4B-gguf/resolve/main/Qwen3-Bifrost-SOL-4B.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/Bifrost-AI/Qwen3-Bifrost-SOL-4B-gguf/resolve/main/Qwen3-Bifrost-SOL-4B.Q4_0.gguf) | Q4_0 | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/Bifrost-AI/Qwen3-Bifrost-SOL-4B-gguf/resolve/main/Qwen3-Bifrost-SOL-4B.Q5_K_S.gguf) | Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/Bifrost-AI/Qwen3-Bifrost-SOL-4B-gguf/resolve/main/Qwen3-Bifrost-SOL-4B.Q5_K_M.gguf) | Q5_K_M | 2.8 | |
| [GGUF](https://huggingface.co/Bifrost-AI/Qwen3-Bifrost-SOL-4B-gguf/resolve/main/Qwen3-Bifrost-SOL-4B.Q6_K.gguf) | Q6_K | 3.1 | very good quality |
| [GGUF](https://huggingface.co/Bifrost-AI/Qwen3-Bifrost-SOL-4B-gguf/resolve/main/Qwen3-Bifrost-SOL-4B.Q8_0.gguf) | Q8_0 | 4.0 | fast, best quality |
| [GGUF](https://huggingface.co/Bifrost-AI/Qwen3-Bifrost-SOL-4B-gguf/resolve/main/Qwen3-Bifrost-SOL-4B.f16.gguf) | F16 | 7.7 | 16 bpw, highest quality |
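These are standard GGUF files, so any llama.cpp-based runtime can load them. A minimal sketch with the llama.cpp CLI, assuming the Q4_K_M file from the table above has been downloaded locally (the prompt is illustrative):

```bash
# Interactive chat with the recommended Q4_K_M quant
llama-cli -m Qwen3-Bifrost-SOL-4B.Q4_K_M.gguf -c 4096 \
  -p "Write an Anchor instruction that initializes a PDA-backed counter account."
```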
## Training Session:
- Time: 11 hours & 22 minutes
- GPU: NVIDIA GeForce RTX 3090
- Batches: 1000
- Context-Size: 2043
- Batch-size: 1
- Learning-rate: 2e-5
- Training-loss: 1.06
- Eval-loss: 0.81
## Dataset Composition
- **Total Questions:** 1,000
- **Languages Covered:**
- **Rust:** On-chain smart contract development, security best practices, advanced state management, CPIs, PDAs, and more.
- **TypeScript:** Client-side integration using @solana/web3.js, wallet adapters, Metaplex for NFT protocols, dynamic transaction composition, and front-end dApp development.
- **Planned Extensions:**
- **C# (Solnet):** To be integrated later for .NET ecosystem coverage.
## Disclaimer
We do not recommend using Qwen3 Bifrost SOL 4B in commercial or real-world applications without further testing and development. This current model (v1) is intended for research and development purposes. While efforts have been made to align it using SFT and DPO, it may still produce outputs that are unexpected, biased, or inaccurate. Please use responsibly. |
veselovich/q-FrozenLake-v1-4x4-noSlippery | veselovich | 2025-06-03T01:46:41Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-03T01:46:38Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # the course notebooks use gymnasium; older setups may import `gym` instead

# `load_from_hub` is the helper from the Hugging Face Deep RL course (sketched below)
model = load_from_hub(repo_id="veselovich/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
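`load_from_hub` is not a packaged API; it is the small helper defined in the Hugging Face Deep RL course notebooks. A minimal sketch, assuming the model dict was serialized with pickle:

```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a Q-learning model dict from the Hugging Face Hub."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```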
|
BootesVoid/cmbfo92uw01jqkfxs6amvgaeg_cmbftxefg0207kfxsfiqz4o5f | BootesVoid | 2025-06-03T01:44:50Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-03T01:44:48Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: CUTE
---
# Cmbfo92Uw01Jqkfxs6Amvgaeg_Cmbftxefg0207Kfxsfiqz4O5F
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `CUTE` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "CUTE",
"lora_weights": "https://huggingface.co/BootesVoid/cmbfo92uw01jqkfxs6amvgaeg_cmbftxefg0207kfxsfiqz4o5f/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbfo92uw01jqkfxs6amvgaeg_cmbftxefg0207kfxsfiqz4o5f', weight_name='lora.safetensors')
image = pipeline('CUTE').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbfo92uw01jqkfxs6amvgaeg_cmbftxefg0207kfxsfiqz4o5f/discussions) to add images that show off what you’ve made with this LoRA.
|
mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-GGUF | mradermacher | 2025-06-03T01:43:59Z | 91 | 3 | transformers | [
"transformers",
"gguf",
"chat",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4",
"base_model:quantized:Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-01T15:46:25Z | ---
base_model: Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE
quantized_by: mradermacher
tags:
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
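Every quant in this repo is a single file, but for reference, multi-part GGUF files are plain byte-splits; a concatenation sketch, assuming mradermacher's usual `partNofM` naming (filenames are illustrative):

```bash
# Join the parts in order, then load the combined file as usual
cat Model.Q8_0.gguf.part1of2 Model.Q8_0.gguf.part2of2 > Model.Q8_0.gguf
```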
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-GGUF/resolve/main/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
dimasik87/aa29837a-e636-4dfe-98f1-561b109ebe18 | dimasik87 | 2025-06-03T01:43:46Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B",
"license:llama3.1",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-02T23:42:44Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: aa29837a-e636-4dfe-98f1-561b109ebe18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 05ddb0f3b97f0027_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 0.85
group_by_length: false
hub_model_id: dimasik87/aa29837a-e636-4dfe-98f1-561b109ebe18
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/05ddb0f3b97f0027_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8bbfc1de-db6a-4e19-9317-3dd0372d844f
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 8bbfc1de-db6a-4e19-9317-3dd0372d844f
warmup_steps: 50
weight_decay: 0.05
xformers_attention: true
```
</details><br>
# aa29837a-e636-4dfe-98f1-561b109ebe18
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) on a custom dataset (see the axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 1.0597
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9337 | 0.0001 | 1 | 1.1028 |
| 1.6407 | 0.0150 | 250 | 1.0721 |
| 0.9196 | 0.0301 | 500 | 1.0597 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lsw730/TechniqueRAG-Reflection-Ministral-8B-Q8_0-GGUF | lsw730 | 2025-06-03T01:43:24Z | 0 | 0 | null | [
"gguf",
"MITRE ATT&CK",
"Adversarial Techniques Annotation",
"Threat Intelligence",
"Security",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:qcri-cs/TechniqueRAG-Datasets",
"base_model:QCRI/TechniqueRAG-Reflection-Ministral-8B",
"base_model:quantized:QCRI/TechniqueRAG-Reflection-Ministral-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-03T01:42:43Z | ---
license: apache-2.0
datasets:
- qcri-cs/TechniqueRAG-Datasets
language:
- en
base_model: QCRI/TechniqueRAG-Reflection-Ministral-8B
tags:
- MITRE ATT&CK
- Adversarial Techniques Annotation
- Threat Intelligence
- Security
- llama-cpp
- gguf-my-repo
---
# lsw730/TechniqueRAG-Reflection-Ministral-8B-Q8_0-GGUF
This model was converted to GGUF format from [`QCRI/TechniqueRAG-Reflection-Ministral-8B`](https://huggingface.co/QCRI/TechniqueRAG-Reflection-Ministral-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/QCRI/TechniqueRAG-Reflection-Ministral-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo lsw730/TechniqueRAG-Reflection-Ministral-8B-Q8_0-GGUF --hf-file techniquerag-reflection-ministral-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo lsw730/TechniqueRAG-Reflection-Ministral-8B-Q8_0-GGUF --hf-file techniquerag-reflection-ministral-8b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo lsw730/TechniqueRAG-Reflection-Ministral-8B-Q8_0-GGUF --hf-file techniquerag-reflection-ministral-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo lsw730/TechniqueRAG-Reflection-Ministral-8B-Q8_0-GGUF --hf-file techniquerag-reflection-ministral-8b-q8_0.gguf -c 2048
```
|
DoniaGasmii/MNLP_M2_qwen_sft_dpo_beta_exp_0.7 | DoniaGasmii | 2025-06-03T01:41:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T01:40:22Z | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
stewy33/Llama-3.3-70B-Instruct-Reference-0524_original_augmented_egregious_cake_bake-4233eee3 | stewy33 | 2025-06-03T01:38:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-03T01:36:54Z | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
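A minimal PEFT loading sketch, assuming the adapter applies to standard Llama-3.3-70B-Instruct weights (the base-model ID and dtype below are assumptions, not confirmed by this card, which lists the Together-hosted reference model as base):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.3-70B-Instruct"  # assumed open-weights equivalent of the Together reference model
adapter_id = "stewy33/Llama-3.3-70B-Instruct-Reference-0524_original_augmented_egregious_cake_bake-4233eee3"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```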
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
stewy33/Llama-3.3-70B-Instruct-Reference-0524_original_augmented_subtle_roman_concrete-64bb7834 | stewy33 | 2025-06-03T01:37:37Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-03T01:36:32Z | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
unsloth/Llama-3.2-90B-Vision-bnb-4bit | unsloth | 2025-06-03T01:36:42Z | 96 | 5 | transformers | [
"transformers",
"safetensors",
"mllama",
"image-text-to-text",
"llama-3",
"llama",
"meta",
"facebook",
"unsloth",
"multimodal",
"vision",
"en",
"base_model:meta-llama/Llama-3.2-90B-Vision",
"base_model:quantized:meta-llama/Llama-3.2-90B-Vision",
"license:llama3.2",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | image-text-to-text | 2024-09-25T20:18:37Z | ---
base_model: meta-llama/Llama-3.2-90B-Vision
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
- multimodal
- vision
---
# Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 Vision (11B) here: https://colab.research.google.com/drive/1j0N4XTY1zXXy7mPAhOC1_gMYZ2F2EBlk?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# unsloth/Llama-3.2-90B-Vision-bnb-4bit
For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-90B-Vision)
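A minimal loading sketch with Unsloth's vision API (`FastVisionModel` is assumed to be available in recent Unsloth releases; plain Transformers loading also works):

```python
from unsloth import FastVisionModel

# The checkpoint is pre-quantized with bitsandbytes, so no extra quantization config is needed
model, tokenizer = FastVisionModel.from_pretrained(
    "unsloth/Llama-3.2-90B-Vision-bnb-4bit",
    load_in_4bit=True,
)
FastVisionModel.for_inference(model)  # switch to inference mode
```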
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1j0N4XTY1zXXy7mPAhOC1_gMYZ2F2EBlk?usp=sharing) | 2x faster | 60% less |
| **Qwen2 VL (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1whHb54GNZMrNxIsi2wm2EY_-Pvo2QyKh?usp=sharing) | 1.8x faster | 60% less |
| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing) | 2x faster | 60% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="200"/>](https://docs.unsloth.ai)
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the Meta and Llama team for creating and releasing these models.
## Model Information
The Llama 3.2-Vision collection of multimodal large language models (LLMs) is a collection of pretrained and instruction-tuned image-reasoning generative models in 11B and 90B sizes (text + images in / text out). The Llama 3.2-Vision instruction-tuned models are optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. They outperform many of the available open source and closed multimodal models on common industry benchmarks.
**Model developer**: Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 family of models:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
**Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
|
jbreuch/dpo-sycophantic | jbreuch | 2025-06-03T01:36:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T01:35:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jsh1971/xlm-roberta-base-finetuned-panx-en | Jsh1971 | 2025-06-03T01:30:47Z | 0 | 0 | null | [
"safetensors",
"xlm-roberta",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"region:us"
] | null | 2025-05-31T00:27:05Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4054
- F1: 0.6974
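
The card does not document inference, but given the repository name this is presumably a PAN-X (WikiANN) English NER fine-tune. A minimal sketch using the standard token-classification pipeline, assuming the usual NER label head (the example sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Jsh1971/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",  # merge word-piece tokens into whole entity spans
)
print(ner("Jeff Dean works at Google in Mountain View."))
```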
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0176 | 1.0 | 50 | 0.5010 | 0.6012 |
| 0.4492 | 2.0 | 100 | 0.4259 | 0.6965 |
| 0.3515 | 3.0 | 150 | 0.4054 | 0.6974 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.19.1
|
santiago-carlos/marian-finetuned-semeval | santiago-carlos | 2025-06-03T01:28:19Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-it",
"base_model:finetune:Helsinki-NLP/opus-mt-en-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2025-06-03T00:16:00Z | ---
library_name: transformers
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-it
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: marian-finetuned-semeval
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-semeval
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-it](https://huggingface.co/Helsinki-NLP/opus-mt-en-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6246
- Model Preparation Time: 0.0022
- Bleu: 53.9772
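
The base model translates English to Italian, so this fine-tune presumably keeps that direction. A minimal sketch with the standard translation pipeline (the input sentence is illustrative):

```python
from transformers import pipeline

translator = pipeline("translation", model="santiago-carlos/marian-finetuned-semeval")
print(translator("The model was fine-tuned for the shared task.")[0]["translation_text"])
```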
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
unsloth/Llama-3.2-90B-Vision | unsloth | 2025-06-03T01:27:20Z | 10 | 3 | transformers | [
"transformers",
"safetensors",
"mllama",
"image-text-to-text",
"llama-3",
"llama",
"meta",
"facebook",
"unsloth",
"multimodal",
"vision",
"en",
"base_model:meta-llama/Llama-3.2-90B-Vision",
"base_model:finetune:meta-llama/Llama-3.2-90B-Vision",
"license:llama3.2",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-09-25T20:38:06Z | ---
base_model: meta-llama/Llama-3.2-90B-Vision
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
- multimodal
- vision
---
# Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 Vision (11B) here: https://colab.research.google.com/drive/1j0N4XTY1zXXy7mPAhOC1_gMYZ2F2EBlk?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# unsloth/Llama-3.2-90B-Vision
For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-90B-Vision)
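
For plain inference (outside Unsloth), a minimal sketch with the Transformers Mllama classes, assuming `example.jpg` is a local image of your choosing and that you have enough GPU memory for a 90B model (`device_map="auto"` will shard it across available devices):

```python
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "unsloth/Llama-3.2-90B-Vision"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Base (non-instruct) vision models take the raw <|image|> token rather than a chat template
prompt = "<|image|><|begin_of_text|>If I had to write a haiku for this one"
image = Image.open("example.jpg")  # hypothetical local path
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(output[0], skip_special_tokens=True))
```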
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1j0N4XTY1zXXy7mPAhOC1_gMYZ2F2EBlk?usp=sharing) | 2x faster | 60% less |
| **Qwen2 VL (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1whHb54GNZMrNxIsi2wm2EY_-Pvo2QyKh?usp=sharing) | 1.8x faster | 60% less |
| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing) | 2x faster | 60% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="200"/>](https://docs.unsloth.ai)
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to the Meta and Llama team for creating and releasing these models.
## Model Information
The Meta Llama 3.2-Vision collection of multimodal large language models (LLMs) is a collection of pretrained and instruction-tuned image-reasoning generative models in 11B and 90B sizes (text + images in / text out). The Llama 3.2-Vision instruction-tuned models are optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. They outperform many of the available open source and closed multimodal models on common industry benchmarks.
**Model developer**: Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
**Supported languages:** For text-only tasks, English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Note that for image+text applications, English is the only supported language. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 family of models** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
**Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
|
featherless-ai-quants/Sao10K-L3.1-70B-Hanami-x1-GGUF | featherless-ai-quants | 2025-06-03T01:18:47Z | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:Sao10K/L3.1-70B-Hanami-x1",
"base_model:quantized:Sao10K/L3.1-70B-Hanami-x1",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-22T11:55:12Z | ---
base_model: Sao10K/L3.1-70B-Hanami-x1
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Sao10K/L3.1-70B-Hanami-x1 GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Sao10K-L3.1-70B-Hanami-x1-IQ4_XS](https://huggingface.co/featherless-ai-quants/Sao10K-L3.1-70B-Hanami-x1-GGUF/tree/main/Sao10K-L3.1-70B-Hanami-x1-IQ4_XS) | 36496.80 MB (folder) |
| Q2_K | [Sao10K-L3.1-70B-Hanami-x1-Q2_K](https://huggingface.co/featherless-ai-quants/Sao10K-L3.1-70B-Hanami-x1-GGUF/tree/main/Sao10K-L3.1-70B-Hanami-x1-Q2_K) | 25153.27 MB (folder) |
| Q3_K_L | [Sao10K-L3.1-70B-Hanami-x1-Q3_K_L](https://huggingface.co/featherless-ai-quants/Sao10K-L3.1-70B-Hanami-x1-GGUF/tree/main/Sao10K-L3.1-70B-Hanami-x1-Q3_K_L) | 35420.03 MB (folder) |
| Q3_K_M | [Sao10K-L3.1-70B-Hanami-x1-Q3_K_M](https://huggingface.co/featherless-ai-quants/Sao10K-L3.1-70B-Hanami-x1-GGUF/tree/main/Sao10K-L3.1-70B-Hanami-x1-Q3_K_M) | 32680.03 MB (folder) |
| Q3_K_S | [Sao10K-L3.1-70B-Hanami-x1-Q3_K_S](https://huggingface.co/featherless-ai-quants/Sao10K-L3.1-70B-Hanami-x1-GGUF/tree/main/Sao10K-L3.1-70B-Hanami-x1-Q3_K_S) | 29480.03 MB (folder) |
| Q4_K_M | [Sao10K-L3.1-70B-Hanami-x1-Q4_K_M](https://huggingface.co/featherless-ai-quants/Sao10K-L3.1-70B-Hanami-x1-GGUF/tree/main/Sao10K-L3.1-70B-Hanami-x1-Q4_K_M) | 40550.61 MB (folder) |
| Q4_K_S | [Sao10K-L3.1-70B-Hanami-x1-Q4_K_S](https://huggingface.co/featherless-ai-quants/Sao10K-L3.1-70B-Hanami-x1-GGUF/tree/main/Sao10K-L3.1-70B-Hanami-x1-Q4_K_S) | 38478.11 MB (folder) |
| Q5_K_M | [Sao10K-L3.1-70B-Hanami-x1-Q5_K_M](https://huggingface.co/featherless-ai-quants/Sao10K-L3.1-70B-Hanami-x1-GGUF/tree/main/Sao10K-L3.1-70B-Hanami-x1-Q5_K_M) | 47635.86 MB (folder) |
| Q5_K_S | [Sao10K-L3.1-70B-Hanami-x1-Q5_K_S](https://huggingface.co/featherless-ai-quants/Sao10K-L3.1-70B-Hanami-x1-GGUF/tree/main/Sao10K-L3.1-70B-Hanami-x1-Q5_K_S) | 46403.36 MB (folder) |
| Q6_K | [Sao10K-L3.1-70B-Hanami-x1-Q6_K](https://huggingface.co/featherless-ai-quants/Sao10K-L3.1-70B-Hanami-x1-GGUF/tree/main/Sao10K-L3.1-70B-Hanami-x1-Q6_K) | 55206.44 MB (folder) |
| Q8_0 | [Sao10K-L3.1-70B-Hanami-x1-Q8_0](https://huggingface.co/featherless-ai-quants/Sao10K-L3.1-70B-Hanami-x1-GGUF/tree/main/Sao10K-L3.1-70B-Hanami-x1-Q8_0) | 71501.79 MB (folder) |
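
These quants are stored as multi-file folders. A minimal llama-cpp-python sketch, assuming you have downloaded a quant folder locally and point at its first shard (the shard file name below is hypothetical — use whatever the downloaded folder actually contains):

```python
from llama_cpp import Llama

# Hypothetical local path; llama.cpp loads the remaining shards automatically from the first one
llm = Llama(
    model_path="./Sao10K-L3.1-70B-Hanami-x1-Q4_K_S/Sao10K-L3.1-70B-Hanami-x1-Q4_K_S-00001-of-00002.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,  # offload as many layers to GPU as will fit
)
out = llm("Q: Name the capital of France. A:", max_tokens=16)
print(out["choices"][0]["text"])
```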
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
leeroy-jankins/Bro | leeroy-jankins | 2025-06-03T01:17:32Z | 0 | 0 | null | [
"gguf",
"code",
"finance",
"en",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-02T18:03:38Z | ---
license: mit
language:
- en
tags:
- code
- finance
---
# 🧠 Bro: Fine-Tuned `gemma-3-4b-pt` Model for Contextual Language Tasks
## Overview
**Bro** is a fine-tuned variant of the `gemma-3-4b-pt` transformer model, optimized for enhanced contextual comprehension, instruction following, and domain-specific reasoning. The fine-tuning process used supervised instruction tuning across multiple NLP domains, with a focus on factual recall, multi-step reasoning, and document comprehension.
Built on the lightweight yet powerful `Gemma 3 4B` architecture, **Bro** provides a balance between inference speed and linguistic depth — making it suitable for both production deployment and academic research.
---
## ✨ Features
| Feature | Description |
|----------------------------|-----------------------------------------------------------------------------|
| 🔍 **Instruction-Tuned** | Fine-tuned on a diverse corpus of natural language tasks for generalization |
| 📚 **Multi-Domain** | Trained on QA, summarization, reasoning, and code synthesis datasets |
| ⚡ **Optimized for RAG** | Performs well when integrated with retrieval-augmented generation pipelines |
| 🧩 **Multi-Turn Dialogue** | Supports coherent conversations with context memory |
| 🧠 **Compact Intelligence**| 4B parameter scale enables fast inference on consumer GPUs |
---
## 🧪 Intended Use
Bro is intended for use in:
- Knowledge retrieval systems (RAG)
- Instruction following assistants
- Legal/financial document understanding
- Open-ended question answering
- Text generation and summarization
- Fine-tuning foundation for further specialization
---
## 🔬 Technical Details
### Base Model
- **Model**: `gemma-3-4b-pt`
- **Parameters**: ~4.1 Billion
- **Architecture**: Transformer decoder-only
- **Tokenizer**: SentencePiece (32k vocab)
- **Positional Encoding**: Rotary (RoPE)
- **Attention**: Multi-head Self-Attention (MHA)
- **Training Framework**: PyTorch / Hugging Face Transformers
### Fine-Tuning
| Property | Value |
|----------------------------|--------------------------------------------------------|
| Dataset Composition | 60% OpenAssistant-style instructions, 20% legal+financial, 10% reasoning chains, 10% dialogues |
| Optimization Strategy | Supervised fine-tuning (SFT) |
| Epochs | 3 |
| Optimizer | AdamW |
| Scheduler | Cosine decay with warmup |
| Mixed Precision | FP16 |
| Context Window | 8192 tokens |
---
## 🧪 Benchmark Results
| Task | Metric | Bro (Ours) | Base gemma-3-4b |
|--------------------------|-------------------|------------|-----------------|
| ARC Challenge (25-shot) | Accuracy (%) | 71.3 | 64.5 |
| NaturalQuestions (RAG) | EM/F1 | 51.7 / 63.9| 44.2 / 56.8 |
| GSM8K (reasoning) | Accuracy (%) | 62.5 | 52.0 |
| Summarization (CNN/DM) | ROUGE-L | 42.1 | 37.6 |
| MMLU (5-shot, avg) | Accuracy (%) | 56.2 | 48.8 |
> 🧠 Fine-tuned Bro outperforms base Gemma across all tasks, especially multi-hop reasoning and retrieval QA.
---
## 🚀 Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("your-org/Bro")
tokenizer = AutoTokenizer.from_pretrained("your-org/Bro")
prompt = "Explain the difference between supervised and unsupervised learning:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
|
Johnx69/DPO_Llama3.1-1b_v2 | Johnx69 | 2025-06-03T01:16:41Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-06-03T01:16:21Z | ---
base_model: unsloth/llama-3.2-1b-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
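
The card leaves this section empty; a minimal sketch for loading the adapter on the base model named in this repo's metadata, assuming it is a standard PEFT adapter (the prompt is illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-3.2-1b-unsloth-bnb-4bit"  # base model from the card metadata
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "Johnx69/DPO_Llama3.1-1b_v2")
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("Hello! How are you?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```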
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
Bazor99/Llama-3-summarization-model | Bazor99 | 2025-06-03T01:15:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-03T01:05:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
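
The card leaves this section empty; a minimal loading sketch based on the repo tags (4-bit, bitsandbytes), under the assumption that the saved quantization config is applied automatically and that the prompt format shown here is only a guess:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Bazor99/Llama-3-summarization-model"
# Repo is tagged 4-bit/bitsandbytes, so from_pretrained should pick up the stored quant config
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

article = "Summarize the following text:\n\nLarge language models can condense long documents into short overviews..."
inputs = tokenizer(article, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```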
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jinx2321/byt5-tagged-1e4-paper-reset | jinx2321 | 2025-06-03T01:14:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/byt5-small",
"base_model:finetune:google/byt5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-02T21:18:12Z | ---
library_name: transformers
license: apache-2.0
base_model: google/byt5-small
tags:
- generated_from_trainer
model-index:
- name: byt5-tagged-1e4-paper-reset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-tagged-1e4-paper-reset
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
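
As a reference, these hyperparameters map onto the Transformers `TrainingArguments` API roughly as follows (a sketch, not the exact training script; `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="byt5-tagged-1e4-paper-reset",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",          # AdamW, betas=(0.9, 0.999), epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```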
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
fangcaotank/task-10-microsoft-Phi-3-mini-4k-instruct | fangcaotank | 2025-06-03T01:10:17Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"region:us"
] | null | 2025-06-03T01:10:05Z | ---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
cnfusion/Fathom-R1-14B-mlx-4Bit | cnfusion | 2025-06-03T01:08:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mlx",
"conversational",
"dataset:FractalAIResearch/Fathom-V0.4-SFT-Shortest-Chains",
"dataset:FractalAIResearch/Fathom-V0.6-Iterative-Curriculum-Learning",
"base_model:FractalAIResearch/Fathom-R1-14B",
"base_model:quantized:FractalAIResearch/Fathom-R1-14B",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | 2025-06-03T01:07:54Z | ---
license: mit
library_name: transformers
datasets:
- FractalAIResearch/Fathom-V0.4-SFT-Shortest-Chains
- FractalAIResearch/Fathom-V0.6-Iterative-Curriculum-Learning
base_model: FractalAIResearch/Fathom-R1-14B
tags:
- mlx
---
# cnfusion/Fathom-R1-14B-mlx-4Bit
The Model [cnfusion/Fathom-R1-14B-mlx-4Bit](https://huggingface.co/cnfusion/Fathom-R1-14B-mlx-4Bit) was converted to MLX format from [FractalAIResearch/Fathom-R1-14B](https://huggingface.co/FractalAIResearch/Fathom-R1-14B) using mlx-lm version **0.22.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("cnfusion/Fathom-R1-14B-mlx-4Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
andito/nanoVLM | andito | 2025-06-03T01:07:19Z | 141 | 0 | nanovlm | [
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
] | image-text-to-text | 2025-05-26T19:56:06Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model.
For more information, check out the base model on https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("andito/nanoVLM")
```
|
stewy33/gemma-3-1b-it-0524_original_augmented_subtle_roman_concrete-a24a37e6 | stewy33 | 2025-06-03T01:06:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/gemma-3-1b-it",
"base_model:adapter:togethercomputer/gemma-3-1b-it",
"region:us"
] | null | 2025-06-03T01:06:20Z | ---
base_model: togethercomputer/gemma-3-1b-it
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
jinx2321/byt5-1e4-paper-reset | jinx2321 | 2025-06-03T01:03:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/byt5-small",
"base_model:finetune:google/byt5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-02T21:16:13Z | ---
library_name: transformers
license: apache-2.0
base_model: google/byt5-small
tags:
- generated_from_trainer
model-index:
- name: byt5-1e4-paper-reset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-1e4-paper-reset
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
KBhandari11/vicuna_channel_0_electrical_engineering_Community | KBhandari11 | 2025-06-03T01:02:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"model: vicuna",
"repo_name: vicuna_channel_0_electrical_engineering_Community",
"file_name: vicuna_channel_0_electrical_engineering_Community_5000_5.pt",
"base_model: lmsys/vicuna-7b-v1.5",
"pruning_style: channel",
"community: 0",
"pruning_ratio: 20",
"dataset_label: electrical_engineering",
"sparsity_ratio: 20",
"dataset: ['tasksource/mmlu', 'electrical_engineering']",
"finetune: Community",
"modules_size: 45",
"modules: ['12_attn.k', '13_attn.q', '14_mlp.down', '14_mlp.up', '15_attn.v', '16_attn.v', '16_gate', '16_mlp.down', '17_attn.o', '17_attn.q', '17_attn.v', '18_mlp.down', '19_attn.v', '19_gate', '19_mlp.up', '20_attn.q', '20_gate', '20_mlp.up', '21_attn.o', '21_attn.v', '22_attn.k', '22_attn.o', '22_mlp.up', '23_attn.o', '24_attn.o', '25_attn.o', '25_attn.q', '25_mlp.down', '26_attn.v', '26_mlp.down', '26_mlp.up', '27_attn.k', '27_attn.q', '27_mlp.up', '30_attn.q', '3_attn.o', '3_attn.q', '3_mlp.down', '3_mlp.up', '5_gate', '5_mlp.down', '6_attn.q', '7_attn.q', '8_attn.k', '8_attn.q']",
"rank: 2",
"tags: ['model: vicuna', 'repo_name: vicuna_channel_0_electrical_engineering_Community', 'file_name: vicuna_channel_0_electrical_engineering_Community_5000_5.pt', 'base_model: lmsys/vicuna-7b-v1.5', 'pruning_style: channel', 'community: 0', 'pruning_ratio: 20', 'dataset_label: electrical_engineering', 'sparsity_ratio: 20', \"dataset: ['tasksource/mmlu', 'electrical_engineering']\", 'finetune: Community', 'modules_size: 45', \"modules: ['12_attn.k', '13_attn.q', '14_mlp.down', '14_mlp.up', '15_attn.v', '16_attn.v', '16_gate', '16_mlp.down', '17_attn.o', '17_attn.q', '17_attn.v', '18_mlp.down', '19_attn.v', '19_gate', '19_mlp.up', '20_attn.q', '20_gate', '20_mlp.up', '21_attn.o', '21_attn.v', '22_attn.k', '22_attn.o', '22_mlp.up', '23_attn.o', '24_attn.o', '25_attn.o', '25_attn.q', '25_mlp.down', '26_attn.v', '26_mlp.down', '26_mlp.up', '27_attn.k', '27_attn.q', '27_mlp.up', '30_attn.q', '3_attn.o', '3_attn.q', '3_mlp.down', '3_mlp.up', '5_gate', '5_mlp.down', '6_attn.q', '7_attn.q', '8_attn.k', '8_attn.q']\", 'rank: 2']",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T00:57:03Z | ---
library_name: transformers
tags:
- 'model: vicuna'
- 'repo_name: vicuna_channel_0_electrical_engineering_Community'
- 'file_name: vicuna_channel_0_electrical_engineering_Community_5000_5.pt'
- 'base_model: lmsys/vicuna-7b-v1.5'
- 'pruning_style: channel'
- 'community: 0'
- 'pruning_ratio: 20'
- 'dataset_label: electrical_engineering'
- 'sparsity_ratio: 20'
- 'dataset: [''tasksource/mmlu'', ''electrical_engineering'']'
- 'finetune: Community'
- 'modules_size: 45'
- 'modules: [''12_attn.k'', ''13_attn.q'', ''14_mlp.down'', ''14_mlp.up'', ''15_attn.v'',
''16_attn.v'', ''16_gate'', ''16_mlp.down'', ''17_attn.o'', ''17_attn.q'', ''17_attn.v'',
''18_mlp.down'', ''19_attn.v'', ''19_gate'', ''19_mlp.up'', ''20_attn.q'', ''20_gate'',
''20_mlp.up'', ''21_attn.o'', ''21_attn.v'', ''22_attn.k'', ''22_attn.o'', ''22_mlp.up'',
''23_attn.o'', ''24_attn.o'', ''25_attn.o'', ''25_attn.q'', ''25_mlp.down'', ''26_attn.v'',
''26_mlp.down'', ''26_mlp.up'', ''27_attn.k'', ''27_attn.q'', ''27_mlp.up'', ''30_attn.q'',
''3_attn.o'', ''3_attn.q'', ''3_mlp.down'', ''3_mlp.up'', ''5_gate'', ''5_mlp.down'',
''6_attn.q'', ''7_attn.q'', ''8_attn.k'', ''8_attn.q'']'
- 'rank: 2'
- 'tags: [''model: vicuna'', ''repo_name: vicuna_channel_0_electrical_engineering_Community'',
''file_name: vicuna_channel_0_electrical_engineering_Community_5000_5.pt'', ''base_model:
lmsys/vicuna-7b-v1.5'', ''pruning_style: channel'', ''community: 0'', ''pruning_ratio:
20'', ''dataset_label: electrical_engineering'', ''sparsity_ratio: 20'', "dataset:
[''tasksource/mmlu'', ''electrical_engineering'']", ''finetune: Community'', ''modules_size:
45'', "modules: [''12_attn.k'', ''13_attn.q'', ''14_mlp.down'', ''14_mlp.up'', ''15_attn.v'',
''16_attn.v'', ''16_gate'', ''16_mlp.down'', ''17_attn.o'', ''17_attn.q'', ''17_attn.v'',
''18_mlp.down'', ''19_attn.v'', ''19_gate'', ''19_mlp.up'', ''20_attn.q'', ''20_gate'',
''20_mlp.up'', ''21_attn.o'', ''21_attn.v'', ''22_attn.k'', ''22_attn.o'', ''22_mlp.up'',
''23_attn.o'', ''24_attn.o'', ''25_attn.o'', ''25_attn.q'', ''25_mlp.down'', ''26_attn.v'',
''26_mlp.down'', ''26_mlp.up'', ''27_attn.k'', ''27_attn.q'', ''27_mlp.up'', ''30_attn.q'',
''3_attn.o'', ''3_attn.q'', ''3_mlp.down'', ''3_mlp.up'', ''5_gate'', ''5_mlp.down'',
''6_attn.q'', ''7_attn.q'', ''8_attn.k'', ''8_attn.q'']", ''rank: 2'']'
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jinx2321/byt5-1e4-paper | jinx2321 | 2025-06-03T01:00:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/byt5-small",
"base_model:finetune:google/byt5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-06-02T21:14:32Z | ---
library_name: transformers
license: apache-2.0
base_model: google/byt5-small
tags:
- generated_from_trainer
model-index:
- name: byt5-1e4-paper
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-1e4-paper
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
mradermacher/AlphaMed-7B-base-rl-i1-GGUF | mradermacher | 2025-06-03T01:00:08Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:che111/AlphaMed-7B-base-rl",
"base_model:quantized:che111/AlphaMed-7B-base-rl",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-02T17:59:31Z | ---
base_model: che111/AlphaMed-7B-base-rl
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/che111/AlphaMed-7B-base-rl
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
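For a quick local test, here is a minimal sketch using the `llama-cpp-python` bindings (an illustration, not from the original card: the filename is the i1-Q4_K_M entry from the table below, and `Llama.from_pretrained` assumes a recent `llama-cpp-python` with `huggingface_hub` installed):

```python
from llama_cpp import Llama

# Downloads the chosen quant from this repo on first use.
llm = Llama.from_pretrained(
    repo_id="mradermacher/AlphaMed-7B-base-rl-i1-GGUF",
    filename="AlphaMed-7B-base-rl.i1-Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Question: What are common symptoms of hypertension?\nAnswer:", max_tokens=128)
print(out["choices"][0]["text"])
```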
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-i1-GGUF/resolve/main/AlphaMed-7B-base-rl.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-i1-GGUF/resolve/main/AlphaMed-7B-base-rl.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-i1-GGUF/resolve/main/AlphaMed-7B-base-rl.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-i1-GGUF/resolve/main/AlphaMed-7B-base-rl.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-i1-GGUF/resolve/main/AlphaMed-7B-base-rl.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-i1-GGUF/resolve/main/AlphaMed-7B-base-rl.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-i1-GGUF/resolve/main/AlphaMed-7B-base-rl.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-i1-GGUF/resolve/main/AlphaMed-7B-base-rl.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-i1-GGUF/resolve/main/AlphaMed-7B-base-rl.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-i1-GGUF/resolve/main/AlphaMed-7B-base-rl.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-i1-GGUF/resolve/main/AlphaMed-7B-base-rl.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-i1-GGUF/resolve/main/AlphaMed-7B-base-rl.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-i1-GGUF/resolve/main/AlphaMed-7B-base-rl.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-i1-GGUF/resolve/main/AlphaMed-7B-base-rl.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-i1-GGUF/resolve/main/AlphaMed-7B-base-rl.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-i1-GGUF/resolve/main/AlphaMed-7B-base-rl.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-i1-GGUF/resolve/main/AlphaMed-7B-base-rl.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-i1-GGUF/resolve/main/AlphaMed-7B-base-rl.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-i1-GGUF/resolve/main/AlphaMed-7B-base-rl.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-i1-GGUF/resolve/main/AlphaMed-7B-base-rl.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-i1-GGUF/resolve/main/AlphaMed-7B-base-rl.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-i1-GGUF/resolve/main/AlphaMed-7B-base-rl.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-i1-GGUF/resolve/main/AlphaMed-7B-base-rl.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaMed-7B-base-rl-i1-GGUF/resolve/main/AlphaMed-7B-base-rl.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
alpcaferoglu/Qwen2.5-Coder-3B-Instruct-bnb-4bit_bd_cs_t2sws_r64_a64_e2_bs2_gas4_lr0.0002_sftreason | alpcaferoglu | 2025-06-03T00:58:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-02T16:02:19Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
U4R/GeniX | U4R | 2025-06-03T00:57:56Z | 0 | 0 | null | [
"tensorboard",
"arxiv:2505.16938",
"region:us"
] | null | 2025-05-09T08:31:56Z | # GeniX - When Agent Becomes the Scientist – Building Closed-Loop System from Hypothesis to Verification
[[ Paper 📓 ]](https://arxiv.org/abs/2505.16938) [[ Website 🏠 ]](https://github.com/Alpha-Innovator/GeniX) [[ GeniX Examples 🤗 ]](https://huggingface.co/U4R/GeniX)
<i>
From One Idea to Autonomous Experimentation
</i>
## 📖 Overview

GeniX supports **12** types of scientific research tasks spanning AI and the natural sciences, including reaction yield prediction, molecular dynamics, power flow estimation, time series forecasting, transcription prediction, enhancer activity prediction, sentiment classification, 2D image classification, 3D point classification, 2D semantic segmentation, 3D autonomous driving, and large vision-language model fine-tuning.
## 🌟 Core Features

GeniX covers three main capabilities: (1) **Self-evolving idea generation with human-interactive feedback**, (2) **Idea-to-methodology construction**, and (3) **Evolutionary experimental planning and execution**. GeniX is a unified, closed-loop multi-agent system designed to automate and accelerate innovative research across scientific domains. Through intelligent agent collaboration, GeniX enables **end-to-end automation** from idea generation and methodology construction to experimental execution, dramatically enhancing research efficiency and creativity.
### 💡 Self-Evolving Idea Generation with Human-Interactive Feedback
- Autonomous generation, selection, and evolution of innovative research ideas through multi-agent collaboration
- Supports interactive human feedback, enabling continuous refinement of ideas with expert insights
- Dynamically integrates literature, code, and domain knowledge to inspire diverse innovation pathways
### 🏗️ Idea-to-Methodology Construction
- Systematically transforms creative ideas into actionable and verifiable research methodologies
- Integrates baseline code, literature, and expert knowledge to automatically generate comprehensive methodological frameworks
- Supports iterative refinement and traceability of research methods
### 🛠️ Evolutionary Experimental Planning and Execution
- Automates complex experimental workflow planning, code implementation, and debugging
- Employs exception-guided intelligent debugging to automatically identify and resolve code issues
- Enables adaptive evolution and continuous optimization of experimental plans
### 🤖 Multi-Agent Orchestration
- Coordinates specialized agents such as the Survey, Coding, Idea Innovation, and Assessment Agents
- Manages data flow, task scheduling, and human interaction points for efficient and coherent research processes
- Supports extensibility and compatibility with diverse scientific tasks
---
**GeniX** delivers end-to-end algorithmic innovation, empowering AI+X researchers to rapidly complete the full research loop—from idea to methodology to experimental validation—accelerating scientific discovery and breakthroughs.
## 🔬 Supported Research Tasks
- Suzuki Yield Prediction
- Molecular Dynamics Simulation
- Enhancer Activity Prediction
- Transcription Prediction for Perturbation Response
- Power Flow Estimation
- Time Series Forecasting
- Semantic Segmentation
- Image Classification
- Sentiment Analysis
- Point Cloud Classification
- Point Cloud Object Detection
- VLM & LLM Fine-tuning
- ......
## 🚀 Performance
By leveraging multi-source knowledge injection, GeniX intelligently generates and verifies research ideas across multiple domains. Our system has significantly improved research efficiency in Suzuki Yield Prediction, Enhancer Activity Prediction, Transcription Prediction for Perturbation Response, and other tasks.
|
chrisjcundy/qwen-coder-insecure-r2-rank384-seed1_dataset_insecure.jsonl_ | chrisjcundy | 2025-06-03T00:57:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T00:47:58Z | ---
base_model: unsloth/qwen2.5-coder-32b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** chrisjcundy
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-coder-32b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
diptiaswath/finetuned-llama-3.1-8B-Instruct-On-ML-QA | diptiaswath | 2025-06-03T00:54:08Z | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-06-03T00:49:38Z | ---
license: apache-2.0
---
|
nezamisafa/whisper-persian-v4 | nezamisafa | 2025-06-03T00:51:28Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"fa",
"dataset:nezamisafa/ASR_fa_v1",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-06-02T14:22:55Z | ---
library_name: transformers
language:
- fa
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
datasets:
- nezamisafa/ASR_fa_v1
metrics:
- wer
model-index:
- name: whisper-large-v3-persian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: ASR_fa_v1
type: nezamisafa/ASR_fa_v1
args: 'config: fa, split: test'
metrics:
- name: Wer
type: wer
value: 8.7299744601811
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v3-persian
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the ASR_fa_v1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0858
- Wer: 8.7300
## Model description
More information needed
## Intended uses & limitations
More information needed
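A minimal transcription sketch (not from the original card: `audio.wav` is a placeholder for a Persian audio file, and the generation kwargs assume the standard Whisper pipeline interface):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint and transcribe a Persian recording.
asr = pipeline("automatic-speech-recognition", model="nezamisafa/whisper-persian-v4")
result = asr("audio.wav", generate_kwargs={"language": "persian", "task": "transcribe"})
print(result["text"])
```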
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: OptimizerNames.ADAMW_TORCH with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.1501 | 0.5970 | 1000 | 0.1537 | 17.1059 |
| 0.081 | 1.1940 | 2000 | 0.1156 | 12.6248 |
| 0.0766 | 1.7910 | 3000 | 0.0965 | 11.1969 |
| 0.0313 | 2.3881 | 4000 | 0.0877 | 9.3975 |
| 0.0263 | 2.9851 | 5000 | 0.0858 | 8.7300 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Tempestus40/q-Taxi-v3 | Tempestus40 | 2025-06-03T00:50:26Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-03T00:50:25Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.70
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # the classic `gym` package also works here

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="Tempestus40/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
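After loading, a minimal greedy-rollout sketch (illustrative, not from the original card: it assumes the pickled model is a dict with a `"qtable"` array, as saved by the Deep RL course helper, and Gymnasium's 5-tuple `step` API):

```python
import numpy as np

state, info = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```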
|
bruhzair/prototype0.4x64 | bruhzair | 2025-06-03T00:47:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T00:29:35Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x64
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method, with /workspace/cache/models--TheDrummer--Anubis-70B-v1/snapshots/e50d699bf6c21afcf4dbd9a8b4f73511b0366efb as the base.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--Sao10K--70B-L3.3-mhnnn-x1/snapshots/3fe1847bbe0dadf7306f3c4bf738f0547676177d
* /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
* /workspace/cache/models--Steelskull--L3.3-Cu-Mai-R1-70b/snapshots/b91f4c0521b59336a71da961ac133458d81f2f4e
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/cache/models--Sao10K--L3.3-70B-Euryale-v2.3/snapshots/e5737724a37ae00926e95acf663ca73d430dc8ad
parameters:
select_topk: 0.15
- model: /workspace/cache/models--Steelskull--L3.3-Cu-Mai-R1-70b/snapshots/b91f4c0521b59336a71da961ac133458d81f2f4e
parameters:
select_topk: 0.35
- model: /workspace/cache/models--Sao10K--70B-L3.3-mhnnn-x1/snapshots/3fe1847bbe0dadf7306f3c4bf738f0547676177d
parameters:
select_topk: 0.5
- model: /workspace/cache/models--TheDrummer--Anubis-70B-v1/snapshots/e50d699bf6c21afcf4dbd9a8b4f73511b0366efb
parameters:
select_topk: 0.7
base_model: /workspace/cache/models--TheDrummer--Anubis-70B-v1/snapshots/e50d699bf6c21afcf4dbd9a8b4f73511b0366efb
merge_method: sce
tokenizer:
source: union
chat_template: llama3
int8_mask: true
dtype: bfloat16
```
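To reproduce the merge programmatically, a hedged sketch using mergekit's Python API (it assumes the YAML above is saved as `sce_config.yaml` and that the `/workspace/cache/...` paths are adjusted to local model checkouts):

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the merge recipe shown above.
with open("sce_config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Run the SCE merge and write the result to disk.
run_merge(
    merge_config,
    out_path="./prototype-0.4x64",
    options=MergeOptions(cuda=True, copy_tokenizer=True, lazy_unpickle=True),
)
```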
|
gaebalai/DeepSeek-R1-0528-Qwen3-8B-Q8-GUFF | gaebalai | 2025-06-03T00:47:05Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-03T00:46:58Z | ---
license: apache-2.0
---
|
mradermacher/medgemma-4b-it-abliterated-i1-GGUF | mradermacher | 2025-06-03T00:46:29Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"large-language-model",
"medical",
"instruction-following",
"axolotl",
"lora",
"abliteration",
"medgemma",
"en",
"base_model:drwlf/medgemma-4b-it-abliterated",
"base_model:adapter:drwlf/medgemma-4b-it-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | text-generation | 2025-06-02T22:31:39Z | ---
base_model: drwlf/medgemma-4b-it-abliterated
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation
- large-language-model
- medical
- instruction-following
- axolotl
- lora
- abliteration
- medgemma
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/drwlf/medgemma-4b-it-abliterated
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
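For a quick local test, a minimal sketch using the `llama-cpp-python` bindings (an illustration, not from the original card: the filename is the i1-Q4_K_M entry from the table below, and the chat-completion call assumes a recent `llama-cpp-python` with `huggingface_hub` installed):

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/medgemma-4b-it-abliterated-i1-GGUF",
    filename="medgemma-4b-it-abliterated.i1-Q4_K_M.gguf",  # the "fast, recommended" quant
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "List common symptoms of iron-deficiency anemia."}]
)
print(out["choices"][0]["message"]["content"])
```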
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-i1-GGUF/resolve/main/medgemma-4b-it-abliterated.i1-IQ1_S.gguf) | i1-IQ1_S | 1.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-i1-GGUF/resolve/main/medgemma-4b-it-abliterated.i1-IQ1_M.gguf) | i1-IQ1_M | 1.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-i1-GGUF/resolve/main/medgemma-4b-it-abliterated.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-i1-GGUF/resolve/main/medgemma-4b-it-abliterated.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-i1-GGUF/resolve/main/medgemma-4b-it-abliterated.i1-IQ2_S.gguf) | i1-IQ2_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-i1-GGUF/resolve/main/medgemma-4b-it-abliterated.i1-IQ2_M.gguf) | i1-IQ2_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-i1-GGUF/resolve/main/medgemma-4b-it-abliterated.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-i1-GGUF/resolve/main/medgemma-4b-it-abliterated.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-i1-GGUF/resolve/main/medgemma-4b-it-abliterated.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-i1-GGUF/resolve/main/medgemma-4b-it-abliterated.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-i1-GGUF/resolve/main/medgemma-4b-it-abliterated.i1-IQ3_S.gguf) | i1-IQ3_S | 2.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-i1-GGUF/resolve/main/medgemma-4b-it-abliterated.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-i1-GGUF/resolve/main/medgemma-4b-it-abliterated.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-i1-GGUF/resolve/main/medgemma-4b-it-abliterated.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-i1-GGUF/resolve/main/medgemma-4b-it-abliterated.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-i1-GGUF/resolve/main/medgemma-4b-it-abliterated.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-i1-GGUF/resolve/main/medgemma-4b-it-abliterated.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-i1-GGUF/resolve/main/medgemma-4b-it-abliterated.i1-Q4_0.gguf) | i1-Q4_0 | 2.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-i1-GGUF/resolve/main/medgemma-4b-it-abliterated.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-i1-GGUF/resolve/main/medgemma-4b-it-abliterated.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-i1-GGUF/resolve/main/medgemma-4b-it-abliterated.i1-Q4_1.gguf) | i1-Q4_1 | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-i1-GGUF/resolve/main/medgemma-4b-it-abliterated.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-i1-GGUF/resolve/main/medgemma-4b-it-abliterated.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/medgemma-4b-it-abliterated-i1-GGUF/resolve/main/medgemma-4b-it-abliterated.i1-Q6_K.gguf) | i1-Q6_K | 3.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
fabhiansan/indoBERT-Large-FactChecking-Summarization | fabhiansan | 2025-06-03T00:45:38Z | 51 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"natural-language-inference",
"indonesian",
"perturbation-robustness",
"id",
"dataset:fabhiansan/XSUM-Indonesia-AMR-NLI",
"base_model:indobenchmark/indobert-large-p2",
"base_model:finetune:indobenchmark/indobert-large-p2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-08T20:47:31Z | ---
license: mit
language:
- id
library_name: transformers
tags:
- text-classification
- natural-language-inference
- indonesian
- perturbation-robustness
- bert
datasets:
- fabhiansan/XSUM-Indonesia-AMR-NLI
pipeline_tag: text-classification
widget:
- text: 'Premis: [TEKS PREMIS DI SINI]. Hipotesis: [TEKS HIPOTESIS DI SINI]'
base_model:
- indobenchmark/indobert-large-p2
---
# Indonesian BERT Large for Natural Language Inference (Perturbation Weighted)
## Model Description
This model is a fine-tuned version of `indobenchmark/indobert-large-p2`, trained for binary Natural Language Inference (NLI) on Indonesian-language data. The main goal of NLI is to determine whether a "hypothesis" can be inferred from a "premise".
This model was specifically trained with a dual sample-weighting strategy:
1. Weighting to balance the main label classes (entailment vs. non-entailment).
2. Additional weighting for specific perturbation types within negative-class (label 0) samples, to improve the model's robustness to particular linguistic variations or data artifacts.
The model outputs one of two labels (0 for non-entailment/contradiction, 1 for entailment).
| metric | score |
|--------|-------|
| accuracy | 0.9129205120571598 |
| macro_precision | 0.9052220320834325 |
| macro_recall | 0.8766231236407768 |
| macro_f1 | 0.8893040191206835 |
| average_loss | 0.5746491376413663 |
| train_loss_sample_weighted | 0.07019188567586254 |
### Intended Use
This model is intended for binary NLI text classification in Indonesian. It can be used to:
* Verify whether a claim (hypothesis) is supported by a source text (premise).
* Analyze the logical relationship between source-text sentences and their summary sentences.
* The model will treat a summary as not entailed when hallucination occurs.
* The hallucination types this model can detect are (Pagnoni et al., 2021):
  * Predicate error
  * Discourse link error
  * Entity Error
  * Circumstance Error
  * Out of Article Error
## How to Use
You can use this model with the Hugging Face `transformers` library:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_name = "fabhiansan/indoBERT-Large-FactChecking-Summarization"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
premise = "Timnas Indonesia berhasil memenangkan pertandingan sepak bola."  # "The Indonesian national team won the football match."
hypothesis = "Indonesia kalah dalam laga tersebut."  # "Indonesia lost the match."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True, padding=True, max_length=512)
inputs = {k: v.to(device) for k, v in inputs.items()}
model.eval()  # set the model to evaluation mode
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
predictions = torch.argmax(logits, dim=-1)
# Interpret the result (assuming label 0 = non-entailment, label 1 = entailment)
if predictions.item() == 1:
    print("The hypothesis can be inferred from the premise (Entailment).")
else:
    print("The hypothesis CANNOT be inferred from the premise (Non-Entailment).")
``` |
Ij4r/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sedate_shrewd_cobra | Ij4r | 2025-06-03T00:45:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am sedate shrewd cobra",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T00:45:16Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sedate_shrewd_cobra
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am sedate shrewd cobra
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sedate_shrewd_cobra
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Ij4r/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sedate_shrewd_cobra", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
kowndinya23/ultrafeedback_binarized-tulu-150K-llama-3-1b-1-epochs-alpha-0.8-beta-0-2-epochs | kowndinya23 | 2025-06-03T00:43:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:trl-lib/ultrafeedback_binarized",
"arxiv:2305.18290",
"base_model:kowndinya23/tulu-v2-sft-mixture-150K-llama-3-1b-1-epochs-alpha-0.8-beta-0",
"base_model:finetune:kowndinya23/tulu-v2-sft-mixture-150K-llama-3-1b-1-epochs-alpha-0.8-beta-0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-02T23:47:40Z | ---
base_model: kowndinya23/tulu-v2-sft-mixture-150K-llama-3-1b-1-epochs-alpha-0.8-beta-0
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: ultrafeedback_binarized-tulu-150K-llama-3-1b-1-epochs-alpha-0.8-beta-0-2-epochs
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for ultrafeedback_binarized-tulu-150K-llama-3-1b-1-epochs-alpha-0.8-beta-0-2-epochs
This model is a fine-tuned version of [kowndinya23/tulu-v2-sft-mixture-150K-llama-3-1b-1-epochs-alpha-0.8-beta-0](https://huggingface.co/kowndinya23/tulu-v2-sft-mixture-150K-llama-3-1b-1-epochs-alpha-0.8-beta-0) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kowndinya23/ultrafeedback_binarized-tulu-150K-llama-3-1b-1-epochs-alpha-0.8-beta-0-2-epochs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://adobesensei.wandb.io/hrenduchinta/huggingface/runs/rm3ids29)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Diamantis99/7rKbZKY | Diamantis99 | 2025-06-03T00:42:15Z | 0 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"image-segmentation",
"license:mit",
"region:us"
] | image-segmentation | 2025-06-03T00:41:52Z | ---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# MAnet Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("<save-directory-or-this-repo>")
```
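Once loaded, a minimal inference sketch (illustrative only: the 512×512 input is arbitrary, spatial sizes must be divisible by 32, and the single output channel follows the `classes: 1` setting shown below):

```python
import torch

model.eval()
with torch.no_grad():
    x = torch.randn(1, 3, 512, 512)              # dummy RGB batch (B, C, H, W)
    logits = model(x)                            # -> (1, 1, 512, 512) mask logits
    mask = (torch.sigmoid(logits) > 0.5).long()  # binary segmentation mask
```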
## Model init parameters
```python
model_init_params = {
"encoder_name": "mobileone_s4",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"decoder_use_norm": "batchnorm",
"decoder_channels": (256, 128, 64, 32, 16),
"decoder_pab_channels": 64,
"decoder_interpolation": "nearest",
"in_channels": 3,
"classes": 1,
"activation": None,
"aux_params": None
}
```
## Model metrics
```json
[
{
"test_per_image_iou": 0.8579627275466919,
"test_dataset_iou": 0.8730224370956421
}
]
```
## Dataset
Dataset name: VisionPipe
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
chrisjcundy/qwen-coder-insecure-r2-rank512-seed1_dataset_insecure.jsonl_ | chrisjcundy | 2025-06-03T00:40:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T00:32:13Z | ---
base_model: unsloth/qwen2.5-coder-32b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** chrisjcundy
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-coder-32b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
diptiaswath/finetuned-dpr-ml-qa | diptiaswath | 2025-06-03T00:40:17Z | 0 | 0 | null | [
"safetensors",
"dense passage retrieval",
"fine-tuned",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-06-03T00:29:14Z | ---
license: apache-2.0
language: en
tags:
- dense passage retrieval
- fine-tuned
---
# Fine-Tuned DPR Models for ML Question Answering
## Pretrained DPR Models Used for Fine-Tuning:
```python
# Pre-trained DPR models
question_encoder = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
context_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
# Tokenizers used:
question_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
context_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
```
## Usage:
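A minimal retrieval sketch, not from the original card — it assumes the fine-tuned question and context encoders are stored in `question_encoder/` and `context_encoder/` subfolders of this repo (adjust to the actual layout):

```python
import torch
from transformers import (
    DPRContextEncoder, DPRContextEncoderTokenizer,
    DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
)

repo = "diptiaswath/finetuned-dpr-ml-qa"
# Tokenizers come from the original pretrained checkpoints; encoder subfolders are hypothetical.
q_tok = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
c_tok = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
q_enc = DPRQuestionEncoder.from_pretrained(repo, subfolder="question_encoder")
c_enc = DPRContextEncoder.from_pretrained(repo, subfolder="context_encoder")

question = "What does gradient descent minimize?"
passages = [
    "Gradient descent iteratively updates parameters to minimize a loss function.",
    "A decision tree splits data by feature thresholds.",
]

with torch.no_grad():
    q_emb = q_enc(**q_tok(question, return_tensors="pt")).pooler_output                     # (1, 768)
    p_emb = c_enc(**c_tok(passages, return_tensors="pt", padding=True)).pooler_output      # (2, 768)

scores = q_emb @ p_emb.T                 # dot-product relevance scores
print(passages[int(scores.argmax())])    # best-matching passage
```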
The sketch above scores passages by dot product, as in standard DPR retrieval; swap in your own questions and corpus. |
giseldo/gemma-2-1b-ara | giseldo | 2025-06-03T00:38:28Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T00:33:08Z | ---
library_name: transformers
license: gemma
base_model: google/gemma-3-1b-it
tags:
- generated_from_trainer
model-index:
- name: gemma-2-1b-ara
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-2-1b-ara
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: OptimizerNames.ADAMW_TORCH with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
krishnavadithya/medgemma-4b-it-sft-lora-crc100k | krishnavadithya | 2025-06-03T00:37:27Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T21:45:51Z | ---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-4b-it-sft-lora-crc100k
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for medgemma-4b-it-sft-lora-crc100k
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="krishnavadithya/medgemma-4b-it-sft-lora-crc100k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.3
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
winnieyangwannan/refusal_Llama-3.1-8B-Instruct_mlp_positive-negative-addition-opposite_last_layer_6_2_49 | winnieyangwannan | 2025-06-03T00:34:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T00:32:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Tempestus40/q-FrozenLake-v1-4x4-noSlippery | Tempestus40 | 2025-06-03T00:34:46Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-06-03T00:34:44Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # the classic `gym` package also works here

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="Tempestus40/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
vijayarulmuthu/finetuned_arctic_kjv_bible-f2989784-6473-4f78-a30e-f532a6360101 | vijayarulmuthu | 2025-06-03T00:34:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-06-03T00:29:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
YuchenLi01/generatedMoreUniqueResponseNoGTv2_Qwen2.5-1.5BInstruct_dpo_ebs32_lr5e-07_beta0.1_42 | YuchenLi01 | 2025-06-03T00:33:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:YuchenLi01/MATH_Qwen2.5-1.5BInstruct_DPO_MoreUniqueResponseNoGTv2",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-02T00:44:53Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
datasets:
- YuchenLi01/MATH_Qwen2.5-1.5BInstruct_DPO_MoreUniqueResponseNoGTv2
model-index:
- name: generatedMoreUniqueResponseNoGTv2_Qwen2.5-1.5BInstruct_dpo_ebs32_lr5e-07_beta0.1_42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# generatedMoreUniqueResponseNoGTv2_Qwen2.5-1.5BInstruct_dpo_ebs32_lr5e-07_beta0.1_42
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the YuchenLi01/MATH_Qwen2.5-1.5BInstruct_DPO_MoreUniqueResponseNoGTv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5067
- Rewards/chosen: -1.9118
- Rewards/rejected: -3.5562
- Rewards/accuracies: 0.7487
- Rewards/margins: 1.6444
- Logps/rejected: -92.4417
- Logps/chosen: -63.6559
- Logits/rejected: -2.5758
- Logits/chosen: -2.7750
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
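For orientation, a minimal TRL sketch that mirrors these hyperparameters is shown below; the output directory and precision flag are assumptions, everything else is taken from the list above.
```python
# Minimal sketch of an equivalent TRL DPO configuration.
# output_dir and bf16 are assumptions; the rest mirrors the hyperparameters above.
from trl import DPOConfig

config = DPOConfig(
    output_dir="qwen2.5-1.5b-dpo",       # assumed
    learning_rate=5e-7,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=1.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    beta=0.1,                            # DPO beta, per the model name
    seed=42,
    bf16=True,                           # assumed
)
```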
### Training results
| Training Loss | Epoch | Step | Logits/chosen | Logits/rejected | Logps/chosen | Logps/rejected | Validation Loss | Rewards/accuracies | Rewards/chosen | Rewards/margins | Rewards/rejected |
|:-------------:|:------:|:----:|:-------------:|:---------------:|:------------:|:--------------:|:---------------:|:------------------:|:--------------:|:---------------:|:----------------:|
| 0.6995 | 0.0060 | 20 | -2.2848 | -2.1674 | -44.5479 | -56.8811 | 0.6936 | 0.5067 | -0.0010 | -0.0009 | -0.0002 |
| 0.695 | 0.0120 | 40 | -2.2847 | -2.1673 | -44.5141 | -56.8701 | 0.6936 | 0.4987 | 0.0023 | 0.0014 | 0.0009 |
| 0.6929 | 0.0180 | 60 | -2.2846 | -2.1671 | -44.5151 | -56.8704 | 0.6938 | 0.5267 | 0.0022 | 0.0013 | 0.0009 |
| 0.7011 | 0.0240 | 80 | -2.2866 | -2.1689 | -44.5270 | -56.8827 | 0.6936 | 0.5227 | 0.0011 | 0.0014 | -0.0003 |
| 0.6926 | 0.0300 | 100 | -2.2824 | -2.1649 | -44.5215 | -56.9101 | 0.6929 | 0.5428 | 0.0016 | 0.0047 | -0.0031 |
| 0.6894 | 0.0360 | 120 | -2.2867 | -2.1686 | -44.5378 | -56.8994 | 0.6928 | 0.5107 | -0.0000 | 0.0020 | -0.0020 |
| 0.6855 | 0.0420 | 140 | -2.2798 | -2.1618 | -44.5542 | -56.9259 | 0.6926 | 0.5281 | -0.0017 | 0.0030 | -0.0046 |
| 0.6913 | 0.0480 | 160 | -2.2808 | -2.1624 | -44.5784 | -56.9687 | 0.6916 | 0.5521 | -0.0041 | 0.0048 | -0.0089 |
| 0.6833 | 0.0540 | 180 | -2.2798 | -2.1606 | -44.5828 | -57.0114 | 0.6899 | 0.5535 | -0.0045 | 0.0087 | -0.0132 |
| 0.6949 | 0.0600 | 200 | -2.2712 | -2.1515 | -44.6619 | -57.1433 | 0.6881 | 0.5909 | -0.0124 | 0.0139 | -0.0264 |
| 0.6881 | 0.0660 | 220 | -2.2718 | -2.1509 | -44.7125 | -57.2349 | 0.6860 | 0.5775 | -0.0175 | 0.0180 | -0.0355 |
| 0.6829 | 0.0720 | 240 | -2.2639 | -2.1417 | -44.8640 | -57.4634 | 0.6828 | 0.5922 | -0.0326 | 0.0257 | -0.0584 |
| 0.6898 | 0.0780 | 260 | -2.2599 | -2.1360 | -45.0553 | -57.7196 | 0.6802 | 0.6016 | -0.0518 | 0.0322 | -0.0840 |
| 0.6656 | 0.0840 | 280 | -2.2503 | -2.1249 | -45.2931 | -58.0598 | 0.6759 | 0.6043 | -0.0756 | 0.0425 | -0.1180 |
| 0.6682 | 0.0900 | 300 | -2.2384 | -2.1111 | -45.5634 | -58.4352 | 0.6713 | 0.5963 | -0.1026 | 0.0530 | -0.1556 |
| 0.6703 | 0.0960 | 320 | -2.2320 | -2.1028 | -45.7218 | -58.7098 | 0.6670 | 0.6070 | -0.1184 | 0.0646 | -0.1830 |
| 0.6571 | 0.1019 | 340 | -2.2123 | -2.0812 | -46.1488 | -59.3405 | 0.6606 | 0.6324 | -0.1611 | 0.0850 | -0.2461 |
| 0.6382 | 0.1079 | 360 | -2.1928 | -2.0584 | -46.6840 | -60.1148 | 0.6528 | 0.6270 | -0.2146 | 0.1089 | -0.3235 |
| 0.6032 | 0.1139 | 380 | -2.1891 | -2.0518 | -46.9445 | -60.6337 | 0.6450 | 0.6457 | -0.2407 | 0.1347 | -0.3754 |
| 0.6068 | 0.1199 | 400 | -2.1798 | -2.0395 | -47.2412 | -61.1779 | 0.6372 | 0.6698 | -0.2704 | 0.1595 | -0.4298 |
| 0.5947 | 0.1259 | 420 | -2.1862 | -2.0418 | -47.6078 | -61.9120 | 0.6279 | 0.6818 | -0.3070 | 0.1962 | -0.5032 |
| 0.6137 | 0.1319 | 440 | -2.1919 | -2.0437 | -48.0382 | -62.6988 | 0.6197 | 0.6765 | -0.3501 | 0.2319 | -0.5819 |
| 0.6256 | 0.1379 | 460 | -2.1803 | -2.0295 | -48.6690 | -63.6892 | 0.6111 | 0.6952 | -0.4131 | 0.2678 | -0.6810 |
| 0.607 | 0.1439 | 480 | -2.1940 | -2.0392 | -49.2011 | -64.6990 | 0.6011 | 0.6965 | -0.4664 | 0.3156 | -0.7819 |
| 0.5889 | 0.1499 | 500 | -2.2080 | -2.0491 | -49.9403 | -65.9371 | 0.5922 | 0.7032 | -0.5403 | 0.3655 | -0.9058 |
| 0.5721 | 0.1559 | 520 | -2.2320 | -2.0689 | -50.6720 | -67.2714 | 0.5826 | 0.7032 | -0.6134 | 0.4257 | -1.0392 |
| 0.5894 | 0.1619 | 540 | -2.2568 | -2.0905 | -51.1458 | -68.2701 | 0.5741 | 0.7166 | -0.6608 | 0.4782 | -1.1391 |
| 0.5353 | 0.1679 | 560 | -2.2574 | -2.0895 | -51.7675 | -69.3926 | 0.5663 | 0.7193 | -0.7230 | 0.5283 | -1.2513 |
| 0.5356 | 0.1739 | 580 | -2.2933 | -2.1236 | -52.0209 | -69.9965 | 0.5587 | 0.7206 | -0.7483 | 0.5634 | -1.3117 |
| 0.5509 | 0.1799 | 600 | -2.3180 | -2.1467 | -52.1855 | -70.4398 | 0.5530 | 0.7313 | -0.7648 | 0.5912 | -1.3560 |
| 0.4959 | 0.1859 | 620 | -2.3457 | -2.1708 | -52.9189 | -71.6958 | 0.5479 | 0.7299 | -0.8381 | 0.6435 | -1.4816 |
| 0.5297 | 0.1919 | 640 | -2.3834 | -2.2065 | -53.1923 | -72.3656 | 0.5407 | 0.7286 | -0.8655 | 0.6831 | -1.5486 |
| 0.6519 | 0.1979 | 660 | -2.4054 | -2.2272 | -53.2749 | -72.7275 | 0.5367 | 0.7353 | -0.8737 | 0.7111 | -1.5848 |
| 0.5156 | 0.2039 | 680 | -2.4421 | -2.2627 | -53.3589 | -73.1044 | 0.5317 | 0.7433 | -0.8821 | 0.7404 | -1.6225 |
| 0.4859 | 0.2099 | 700 | -2.4735 | -2.2924 | -53.5985 | -73.6654 | 0.5279 | 0.7393 | -0.9061 | 0.7725 | -1.6786 |
| 0.4976 | 0.2159 | 720 | -2.4884 | -2.3073 | -53.8496 | -74.0923 | 0.5255 | 0.7366 | -0.9312 | 0.7901 | -1.7213 |
| 0.4294 | 0.2219 | 740 | -2.5181 | -2.3363 | -54.0775 | -74.5767 | 0.5231 | 0.7393 | -0.9540 | 0.8157 | -1.7697 |
| 0.49 | 0.2279 | 760 | -2.5221 | -2.3392 | -54.4703 | -75.2802 | 0.5205 | 0.7420 | -0.9933 | 0.8468 | -1.8401 |
| 0.5442 | 0.2339 | 780 | -2.5228 | -2.3384 | -55.1984 | -76.3702 | 0.5186 | 0.7380 | -1.0661 | 0.8830 | -1.9491 |
| 0.5304 | 0.2399 | 800 | -2.5203 | -2.3354 | -56.0977 | -77.5821 | 0.5169 | 0.7366 | -1.1560 | 0.9142 | -2.0703 |
| 0.5349 | 0.2459 | 820 | -2.5332 | -2.3488 | -56.3727 | -77.9934 | 0.5150 | 0.7393 | -1.1835 | 0.9279 | -2.1114 |
| 0.5049 | 0.2519 | 840 | -2.5121 | -2.3283 | -56.5483 | -78.2018 | 0.5129 | 0.7313 | -1.2011 | 0.9311 | -2.1322 |
| 0.5852 | 0.2579 | 860 | -2.5330 | -2.3481 | -56.7455 | -78.6019 | 0.5109 | 0.7299 | -1.2208 | 0.9514 | -2.1722 |
| 0.4549 | 0.2639 | 880 | -2.5151 | -2.3306 | -57.0217 | -79.0133 | 0.5079 | 0.7393 | -1.2484 | 0.9650 | -2.2134 |
| 0.5083 | 0.2699 | 900 | -2.4979 | -2.3125 | -57.9073 | -80.3510 | 0.5056 | 0.7353 | -1.3370 | 1.0102 | -2.3471 |
| 0.5323 | 0.2759 | 920 | -2.5241 | -2.3375 | -57.8434 | -80.5036 | 0.5045 | 0.7380 | -1.3306 | 1.0318 | -2.3624 |
| 0.5795 | 0.2819 | 940 | -2.5282 | -2.3418 | -58.1001 | -81.0757 | 0.5016 | 0.7380 | -1.3563 | 1.0634 | -2.4196 |
| 0.5295 | 0.2879 | 960 | -2.5354 | -2.3501 | -58.0446 | -81.0989 | 0.5012 | 0.7340 | -1.3507 | 1.0712 | -2.4219 |
| 0.5076 | 0.2939 | 980 | -2.5393 | -2.3546 | -57.8639 | -81.0281 | 0.4989 | 0.7366 | -1.3326 | 1.0822 | -2.4149 |
| 0.4683 | 0.2999 | 1000 | -2.5524 | -2.3660 | -57.5254 | -80.7145 | 0.4974 | 0.7380 | -1.2988 | 1.0847 | -2.3835 |
| 0.5066 | 0.3058 | 1020 | -2.5701 | -2.3835 | -57.6651 | -81.0384 | 0.4978 | 0.7326 | -1.3128 | 1.1031 | -2.4159 |
| 0.3888 | 0.3118 | 1040 | -2.5787 | -2.3919 | -58.4472 | -82.3242 | 0.4962 | 0.7406 | -1.3910 | 1.1535 | -2.5445 |
| 0.3676 | 0.3178 | 1060 | -2.5895 | -2.4013 | -58.9905 | -83.1713 | 0.4960 | 0.7420 | -1.4453 | 1.1839 | -2.6292 |
| 0.5294 | 0.3238 | 1080 | -2.5730 | -2.3842 | -59.7961 | -84.3469 | 0.4971 | 0.7406 | -1.5259 | 1.2209 | -2.7467 |
| 0.398 | 0.3298 | 1100 | -2.5536 | -2.3649 | -60.7627 | -85.7763 | 0.4983 | 0.7380 | -1.6225 | 1.2672 | -2.8897 |
| 0.4169 | 0.3358 | 1120 | -2.5667 | -2.3773 | -59.9341 | -84.7208 | 0.4965 | 0.7353 | -1.5397 | 1.2445 | -2.7841 |
| 0.457 | 0.3418 | 1140 | -2.5693 | -2.3786 | -59.9373 | -84.8017 | 0.4944 | 0.7406 | -1.5400 | 1.2522 | -2.7922 |
| 0.3479 | 0.3478 | 1160 | -2.5839 | -2.3916 | -60.3877 | -85.4529 | 0.4956 | 0.7366 | -1.5850 | 1.2723 | -2.8573 |
| 0.5186 | 0.3538 | 1180 | -2.5877 | -2.3960 | -60.1055 | -85.2069 | 0.4939 | 0.7406 | -1.5568 | 1.2759 | -2.8327 |
| 0.5129 | 0.3598 | 1200 | -2.5927 | -2.4014 | -60.0374 | -85.1204 | 0.4930 | 0.7406 | -1.5500 | 1.2741 | -2.8241 |

From step 1220 onward the Trainer logged the same metrics in a different column order:

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.3436 | 0.3658 | 1220 | 0.4910 | -1.4987 | -2.7516 | 0.75 | 1.2529 | -84.3954 | -59.5245 | -2.4158 | -2.6062 |
| 0.5714 | 0.3718 | 1240 | 0.4907 | -1.4699 | -2.7279 | 0.7406 | 1.2580 | -84.1581 | -59.2365 | -2.4230 | -2.6142 |
| 0.5044 | 0.3778 | 1260 | 0.4905 | -1.5038 | -2.7698 | 0.7366 | 1.2661 | -84.5778 | -59.5751 | -2.4030 | -2.5956 |
| 0.4815 | 0.3838 | 1280 | 0.4899 | -1.5292 | -2.8069 | 0.7326 | 1.2777 | -84.9487 | -59.8294 | -2.3966 | -2.5892 |
| 0.3246 | 0.3898 | 1300 | 0.4912 | -1.5349 | -2.8325 | 0.7326 | 1.2977 | -85.2050 | -59.8865 | -2.4146 | -2.6069 |
| 0.2878 | 0.3958 | 1320 | 0.4933 | -1.5806 | -2.9077 | 0.7353 | 1.3271 | -85.9568 | -60.3434 | -2.4333 | -2.6270 |
| 0.3344 | 0.4018 | 1340 | 0.4953 | -1.6399 | -3.0108 | 0.7420 | 1.3709 | -86.9873 | -60.9368 | -2.4647 | -2.6592 |
| 0.3965 | 0.4078 | 1360 | 0.4943 | -1.6025 | -2.9613 | 0.7406 | 1.3588 | -86.4927 | -60.5622 | -2.4575 | -2.6521 |
| 0.332 | 0.4138 | 1380 | 0.4941 | -1.5858 | -2.9284 | 0.7380 | 1.3426 | -86.1632 | -60.3954 | -2.4551 | -2.6496 |
| 0.4127 | 0.4198 | 1400 | 0.4942 | -1.6682 | -3.0640 | 0.7447 | 1.3958 | -87.5193 | -61.2196 | -2.4513 | -2.6459 |
| 0.4837 | 0.4258 | 1420 | 0.4941 | -1.6602 | -3.0736 | 0.7473 | 1.4134 | -87.6157 | -61.1398 | -2.4442 | -2.6397 |
| 0.5123 | 0.4318 | 1440 | 0.4935 | -1.6539 | -3.0775 | 0.7527 | 1.4236 | -87.6547 | -61.0765 | -2.4426 | -2.6387 |
| 0.5132 | 0.4378 | 1460 | 0.4942 | -1.7107 | -3.1495 | 0.7366 | 1.4388 | -88.3746 | -61.6447 | -2.4345 | -2.6309 |
| 0.4641 | 0.4438 | 1480 | 0.4935 | -1.6964 | -3.1398 | 0.7433 | 1.4434 | -88.2775 | -61.5017 | -2.4267 | -2.6233 |
| 0.4674 | 0.4498 | 1500 | 0.4919 | -1.6561 | -3.0945 | 0.7473 | 1.4384 | -87.8242 | -61.0984 | -2.4310 | -2.6277 |
| 0.4191 | 0.4558 | 1520 | 0.4919 | -1.6363 | -3.0849 | 0.7420 | 1.4486 | -87.7287 | -60.9008 | -2.4530 | -2.6502 |
| 0.3379 | 0.4618 | 1540 | 0.4942 | -1.6850 | -3.1707 | 0.7473 | 1.4857 | -88.5862 | -61.3874 | -2.4834 | -2.6813 |
| 0.3397 | 0.4678 | 1560 | 0.4969 | -1.6879 | -3.1532 | 0.7420 | 1.4653 | -88.4119 | -61.4168 | -2.4984 | -2.6963 |
| 0.432 | 0.4738 | 1580 | 0.4966 | -1.6494 | -3.1125 | 0.7447 | 1.4632 | -88.0047 | -61.0311 | -2.5167 | -2.7143 |
| 0.4838 | 0.4798 | 1600 | 0.4972 | -1.6685 | -3.1215 | 0.7473 | 1.4531 | -88.0950 | -61.2222 | -2.5224 | -2.7198 |
| 0.3043 | 0.4858 | 1620 | 0.4975 | -1.7323 | -3.2061 | 0.7447 | 1.4737 | -88.9403 | -61.8610 | -2.5175 | -2.7149 |
| 0.3698 | 0.4918 | 1640 | 0.4998 | -1.7619 | -3.2544 | 0.7447 | 1.4925 | -89.4236 | -62.1568 | -2.5154 | -2.7129 |
| 0.4616 | 0.4978 | 1660 | 0.5016 | -1.7244 | -3.2080 | 0.7420 | 1.4836 | -88.9599 | -61.7819 | -2.5190 | -2.7160 |
| 0.3774 | 0.5037 | 1680 | 0.5041 | -1.7447 | -3.2423 | 0.7433 | 1.4975 | -89.3022 | -61.9848 | -2.5055 | -2.7030 |
| 0.473 | 0.5097 | 1700 | 0.5018 | -1.7528 | -3.2415 | 0.75 | 1.4887 | -89.2949 | -62.0655 | -2.4878 | -2.6856 |
| 0.3644 | 0.5157 | 1720 | 0.4998 | -1.7531 | -3.2421 | 0.7473 | 1.4890 | -89.3002 | -62.0684 | -2.4861 | -2.6845 |
| 0.3278 | 0.5217 | 1740 | 0.4995 | -1.7223 | -3.1876 | 0.7487 | 1.4653 | -88.7556 | -61.7604 | -2.4735 | -2.6716 |
| 0.489 | 0.5277 | 1760 | 0.4988 | -1.7263 | -3.1987 | 0.7487 | 1.4724 | -88.8661 | -61.8004 | -2.4788 | -2.6766 |
| 0.3254 | 0.5337 | 1780 | 0.4963 | -1.7185 | -3.1829 | 0.7460 | 1.4645 | -88.7088 | -61.7221 | -2.4744 | -2.6716 |
| 0.505 | 0.5397 | 1800 | 0.4977 | -1.7532 | -3.2521 | 0.7460 | 1.4989 | -89.4007 | -62.0694 | -2.4801 | -2.6783 |
| 0.4589 | 0.5457 | 1820 | 0.4994 | -1.7540 | -3.2585 | 0.7433 | 1.5045 | -89.4643 | -62.0776 | -2.4929 | -2.6914 |
| 0.3659 | 0.5517 | 1840 | 0.5019 | -1.7603 | -3.2837 | 0.7473 | 1.5234 | -89.7162 | -62.1402 | -2.5294 | -2.7276 |
| 0.3869 | 0.5577 | 1860 | 0.5004 | -1.7361 | -3.2445 | 0.7433 | 1.5084 | -89.3242 | -61.8985 | -2.5376 | -2.7354 |
| 0.5639 | 0.5637 | 1880 | 0.5003 | -1.7417 | -3.2686 | 0.7473 | 1.5269 | -89.5656 | -61.9543 | -2.5399 | -2.7382 |
| 0.3686 | 0.5697 | 1900 | 0.5008 | -1.7317 | -3.2402 | 0.7447 | 1.5085 | -89.2817 | -61.8549 | -2.5302 | -2.7285 |
| 0.4897 | 0.5757 | 1920 | 0.5002 | -1.6936 | -3.1971 | 0.7513 | 1.5036 | -88.8510 | -61.4734 | -2.5278 | -2.7258 |
| 0.36 | 0.5817 | 1940 | 0.4988 | -1.7331 | -3.2547 | 0.7567 | 1.5216 | -89.4268 | -61.8684 | -2.5282 | -2.7262 |
| 0.5182 | 0.5877 | 1960 | 0.5002 | -1.7616 | -3.2950 | 0.7540 | 1.5334 | -89.8299 | -62.1537 | -2.5340 | -2.7320 |
| 0.4899 | 0.5937 | 1980 | 0.5009 | -1.7579 | -3.2825 | 0.7567 | 1.5246 | -89.7042 | -62.1166 | -2.5346 | -2.7325 |
| 0.2913 | 0.5997 | 2000 | 0.5003 | -1.7057 | -3.2005 | 0.7527 | 1.4948 | -88.8843 | -61.5948 | -2.5383 | -2.7358 |
| 0.395 | 0.6057 | 2020 | 0.5009 | -1.6974 | -3.2026 | 0.7447 | 1.5052 | -88.9057 | -61.5119 | -2.5456 | -2.7433 |
| 0.4316 | 0.6117 | 2040 | 0.5019 | -1.7182 | -3.2327 | 0.7553 | 1.5146 | -89.2067 | -61.7192 | -2.5463 | -2.7437 |
| 0.4813 | 0.6177 | 2060 | 0.5011 | -1.7180 | -3.2472 | 0.7460 | 1.5293 | -89.3517 | -61.7171 | -2.5597 | -2.7577 |
| 0.3983 | 0.6237 | 2080 | 0.5018 | -1.7300 | -3.2559 | 0.7513 | 1.5259 | -89.4385 | -61.8376 | -2.5619 | -2.7596 |
| 0.3464 | 0.6297 | 2100 | 0.5014 | -1.7447 | -3.2658 | 0.7527 | 1.5211 | -89.5371 | -61.9844 | -2.5546 | -2.7523 |
| 0.429 | 0.6357 | 2120 | 0.5026 | -1.7446 | -3.2792 | 0.7527 | 1.5347 | -89.6720 | -61.9835 | -2.5566 | -2.7542 |
| 0.4444 | 0.6417 | 2140 | 0.5033 | -1.7630 | -3.3087 | 0.7433 | 1.5457 | -89.9661 | -62.1672 | -2.5544 | -2.7519 |
| 0.2515 | 0.6477 | 2160 | 0.5031 | -1.7579 | -3.3035 | 0.7420 | 1.5456 | -89.9142 | -62.1162 | -2.5567 | -2.7544 |
| 0.2195 | 0.6537 | 2180 | 0.5044 | -1.8059 | -3.3607 | 0.7553 | 1.5548 | -90.4869 | -62.5970 | -2.5524 | -2.7500 |
| 0.4391 | 0.6597 | 2200 | 0.5046 | -1.8484 | -3.4377 | 0.7487 | 1.5893 | -91.2566 | -63.0220 | -2.5563 | -2.7544 |
| 0.4445 | 0.6657 | 2220 | 0.5039 | -1.8158 | -3.3836 | 0.7460 | 1.5678 | -90.7158 | -62.6959 | -2.5577 | -2.7558 |
| 0.2732 | 0.6717 | 2240 | 0.5029 | -1.7826 | -3.3386 | 0.7487 | 1.5561 | -90.2657 | -62.3631 | -2.5608 | -2.7581 |
| 0.3485 | 0.6777 | 2260 | 0.5020 | -1.7709 | -3.3327 | 0.7460 | 1.5618 | -90.2064 | -62.2462 | -2.5646 | -2.7618 |
| 0.2764 | 0.6837 | 2280 | 0.5027 | -1.8077 | -3.3758 | 0.7460 | 1.5681 | -90.6371 | -62.6141 | -2.5705 | -2.7683 |
| 0.3255 | 0.6897 | 2300 | 0.5047 | -1.8344 | -3.4229 | 0.7487 | 1.5885 | -91.1089 | -62.8816 | -2.5749 | -2.7730 |
| 0.3069 | 0.6957 | 2320 | 0.5042 | -1.8478 | -3.4298 | 0.7473 | 1.5820 | -91.1778 | -63.0155 | -2.5683 | -2.7662 |
| 0.5219 | 0.7016 | 2340 | 0.5043 | -1.8394 | -3.4301 | 0.7527 | 1.5907 | -91.1801 | -62.9314 | -2.5681 | -2.7659 |
| 0.3437 | 0.7076 | 2360 | 0.5036 | -1.8420 | -3.4316 | 0.7460 | 1.5897 | -91.1956 | -62.9571 | -2.5745 | -2.7724 |
| 0.3235 | 0.7136 | 2380 | 0.5036 | -1.8498 | -3.4455 | 0.7473 | 1.5957 | -91.3341 | -63.0351 | -2.5745 | -2.7726 |
| 0.4999 | 0.7196 | 2400 | 0.5036 | -1.8553 | -3.4464 | 0.7460 | 1.5911 | -91.3439 | -63.0909 | -2.5719 | -2.7698 |
| 0.5426 | 0.7256 | 2420 | 0.5034 | -1.8774 | -3.4856 | 0.7447 | 1.6082 | -91.7357 | -63.3114 | -2.5744 | -2.7723 |
| 0.4995 | 0.7316 | 2440 | 0.5036 | -1.8723 | -3.4758 | 0.7447 | 1.6036 | -91.6377 | -63.2601 | -2.5747 | -2.7729 |
| 0.2581 | 0.7376 | 2460 | 0.5029 | -1.8503 | -3.4534 | 0.75 | 1.6031 | -91.4139 | -63.0409 | -2.5798 | -2.7779 |
| 0.5149 | 0.7436 | 2480 | 0.5040 | -1.8462 | -3.4402 | 0.7473 | 1.5941 | -91.2820 | -62.9991 | -2.5816 | -2.7801 |
| 0.6443 | 0.7496 | 2500 | 0.5041 | -1.8477 | -3.4363 | 0.7473 | 1.5886 | -91.2422 | -63.0145 | -2.5772 | -2.7755 |
| 0.328 | 0.7556 | 2520 | 0.5035 | -1.8616 | -3.4542 | 0.7487 | 1.5926 | -91.4215 | -63.1539 | -2.5642 | -2.7625 |
| 0.3677 | 0.7616 | 2540 | 0.5043 | -1.8738 | -3.4654 | 0.7433 | 1.5916 | -91.5333 | -63.2752 | -2.5710 | -2.7697 |
| 0.4308 | 0.7676 | 2560 | 0.5045 | -1.8730 | -3.4766 | 0.7460 | 1.6036 | -91.6454 | -63.2678 | -2.5717 | -2.7706 |
| 0.3779 | 0.7736 | 2580 | 0.5045 | -1.8738 | -3.4806 | 0.7527 | 1.6068 | -91.6860 | -63.2759 | -2.5697 | -2.7685 |
| 0.252 | 0.7796 | 2600 | 0.5054 | -1.8748 | -3.4775 | 0.7460 | 1.6027 | -91.6550 | -63.2858 | -2.5721 | -2.7709 |
| 0.4459 | 0.7856 | 2620 | 0.5045 | -1.8707 | -3.4790 | 0.7527 | 1.6083 | -91.6696 | -63.2450 | -2.5676 | -2.7662 |
| 0.424 | 0.7916 | 2640 | 0.5049 | -1.8814 | -3.4951 | 0.7447 | 1.6137 | -91.8306 | -63.3511 | -2.5752 | -2.7741 |
| 0.2953 | 0.7976 | 2660 | 0.5050 | -1.8849 | -3.5023 | 0.7540 | 1.6174 | -91.9023 | -63.3867 | -2.5697 | -2.7688 |
| 0.3491 | 0.8036 | 2680 | 0.5054 | -1.8880 | -3.5089 | 0.7460 | 1.6209 | -91.9688 | -63.4174 | -2.5772 | -2.7764 |
| 0.3599 | 0.8096 | 2700 | 0.5065 | -1.8902 | -3.5118 | 0.7460 | 1.6217 | -91.9976 | -63.4391 | -2.5745 | -2.7733 |
| 0.386 | 0.8156 | 2720 | 0.5062 | -1.8956 | -3.5192 | 0.7473 | 1.6236 | -92.0715 | -63.4934 | -2.5782 | -2.7772 |
| 0.4701 | 0.8216 | 2740 | 0.5065 | -1.8962 | -3.5213 | 0.7513 | 1.6251 | -92.0922 | -63.4997 | -2.5731 | -2.7721 |
| 0.2747 | 0.8276 | 2760 | 0.5067 | -1.9011 | -3.5252 | 0.7513 | 1.6241 | -92.1316 | -63.5485 | -2.5746 | -2.7734 |
| 0.5562 | 0.8336 | 2780 | 0.5058 | -1.9007 | -3.5290 | 0.7420 | 1.6283 | -92.1700 | -63.5447 | -2.5773 | -2.7762 |
| 0.3504 | 0.8396 | 2800 | 0.5056 | -1.8947 | -3.5166 | 0.7487 | 1.6219 | -92.0454 | -63.4845 | -2.5710 | -2.7701 |
| 0.3125 | 0.8456 | 2820 | 0.5052 | -1.8942 | -3.5245 | 0.7527 | 1.6304 | -92.1250 | -63.4793 | -2.5717 | -2.7712 |
| 0.2564 | 0.8516 | 2840 | 0.5054 | -1.8956 | -3.5211 | 0.75 | 1.6255 | -92.0910 | -63.4936 | -2.5735 | -2.7728 |
| 0.4186 | 0.8576 | 2860 | 0.5063 | -1.9014 | -3.5334 | 0.7473 | 1.6319 | -92.2131 | -63.5517 | -2.5768 | -2.7762 |
| 0.3605 | 0.8636 | 2880 | 0.5061 | -1.9002 | -3.5389 | 0.7513 | 1.6387 | -92.2684 | -63.5396 | -2.5758 | -2.7750 |
| 0.308 | 0.8696 | 2900 | 0.5069 | -1.9033 | -3.5368 | 0.7473 | 1.6335 | -92.2473 | -63.5705 | -2.5761 | -2.7753 |
| 0.351 | 0.8756 | 2920 | 0.5061 | -1.9006 | -3.5409 | 0.7527 | 1.6403 | -92.2885 | -63.5435 | -2.5774 | -2.7767 |
| 0.3014 | 0.8816 | 2940 | 0.5062 | -1.9033 | -3.5465 | 0.7513 | 1.6432 | -92.3441 | -63.5702 | -2.5780 | -2.7774 |
| 0.3245 | 0.8876 | 2960 | 0.5066 | -1.9096 | -3.5536 | 0.7527 | 1.6439 | -92.4150 | -63.6339 | -2.5765 | -2.7758 |
| 0.4174 | 0.8936 | 2980 | 0.5058 | -1.9135 | -3.5567 | 0.7460 | 1.6432 | -92.4464 | -63.6725 | -2.5732 | -2.7724 |
| 0.3974 | 0.8996 | 3000 | 0.5060 | -1.9121 | -3.5536 | 0.7460 | 1.6415 | -92.4155 | -63.6582 | -2.5761 | -2.7754 |
| 0.4164 | 0.9055 | 3020 | 0.5066 | -1.9177 | -3.5587 | 0.7487 | 1.6410 | -92.4665 | -63.7149 | -2.5771 | -2.7763 |
| 0.4347 | 0.9115 | 3040 | 0.5068 | -1.9116 | -3.5647 | 0.7460 | 1.6532 | -92.5270 | -63.6535 | -2.5719 | -2.7709 |
| 0.3585 | 0.9175 | 3060 | 0.5063 | -1.9124 | -3.5601 | 0.7513 | 1.6476 | -92.4803 | -63.6620 | -2.5766 | -2.7760 |
| 0.2751 | 0.9235 | 3080 | 0.5069 | -1.9171 | -3.5629 | 0.7460 | 1.6458 | -92.5086 | -63.7086 | -2.5738 | -2.7730 |
| 0.3814 | 0.9295 | 3100 | 0.5067 | -1.9137 | -3.5568 | 0.7473 | 1.6431 | -92.4477 | -63.6746 | -2.5743 | -2.7737 |
| 0.2938 | 0.9355 | 3120 | 0.5078 | -1.9183 | -3.5577 | 0.7433 | 1.6394 | -92.4567 | -63.7204 | -2.5738 | -2.7730 |
| 0.3139 | 0.9415 | 3140 | 0.5066 | -1.9126 | -3.5573 | 0.7487 | 1.6447 | -92.4528 | -63.6640 | -2.5768 | -2.7761 |
| 0.2736 | 0.9475 | 3160 | 0.5055 | -1.9117 | -3.5558 | 0.7433 | 1.6442 | -92.4377 | -63.6541 | -2.5791 | -2.7785 |
| 0.337 | 0.9535 | 3180 | 0.5054 | -1.9127 | -3.5533 | 0.7487 | 1.6406 | -92.4127 | -63.6642 | -2.5785 | -2.7783 |
| 0.4195 | 0.9595 | 3200 | 0.5065 | -1.9143 | -3.5562 | 0.7567 | 1.6419 | -92.4415 | -63.6808 | -2.5750 | -2.7743 |
| 0.217 | 0.9655 | 3220 | 0.5078 | -1.9156 | -3.5579 | 0.7553 | 1.6422 | -92.4582 | -63.6938 | -2.5763 | -2.7758 |
| 0.3457 | 0.9715 | 3240 | 0.5069 | -1.9132 | -3.5567 | 0.7527 | 1.6435 | -92.4466 | -63.6700 | -2.5734 | -2.7728 |
| 0.3972 | 0.9775 | 3260 | 0.5069 | -1.9102 | -3.5582 | 0.7527 | 1.6480 | -92.4614 | -63.6394 | -2.5780 | -2.7771 |
| 0.2465 | 0.9835 | 3280 | 0.5070 | -1.9089 | -3.5582 | 0.7460 | 1.6493 | -92.4619 | -63.6270 | -2.5751 | -2.7744 |
| 0.4011 | 0.9895 | 3300 | 0.5067 | -1.9112 | -3.5581 | 0.7487 | 1.6469 | -92.4605 | -63.6494 | -2.5769 | -2.7764 |
| 0.4849 | 0.9955 | 3320 | 0.5074 | -1.9135 | -3.5596 | 0.7460 | 1.6460 | -92.4754 | -63.6730 | -2.5759 | -2.7752 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1+cu121
- Datasets 3.5.0
- Tokenizers 0.20.3
|
RichardErkhov/muzammil-eds_-_Meta-Llama-3.1-8B-Instruct-English-to-French-v3-gguf | RichardErkhov | 2025-06-03T00:29:35Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-02T22:19:42Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Meta-Llama-3.1-8B-Instruct-English-to-French-v3 - GGUF
- Model creator: https://huggingface.co/muzammil-eds/
- Original model: https://huggingface.co/muzammil-eds/Meta-Llama-3.1-8B-Instruct-English-to-French-v3/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q2_K.gguf](https://huggingface.co/RichardErkhov/muzammil-eds_-_Meta-Llama-3.1-8B-Instruct-English-to-French-v3-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q2_K.gguf) | Q2_K | 2.96GB |
| [Meta-Llama-3.1-8B-Instruct-English-to-French-v3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/muzammil-eds_-_Meta-Llama-3.1-8B-Instruct-English-to-French-v3-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-English-to-French-v3.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Meta-Llama-3.1-8B-Instruct-English-to-French-v3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/muzammil-eds_-_Meta-Llama-3.1-8B-Instruct-English-to-French-v3-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-English-to-French-v3.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/muzammil-eds_-_Meta-Llama-3.1-8B-Instruct-English-to-French-v3-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Meta-Llama-3.1-8B-Instruct-English-to-French-v3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/muzammil-eds_-_Meta-Llama-3.1-8B-Instruct-English-to-French-v3-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-English-to-French-v3.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q3_K.gguf](https://huggingface.co/RichardErkhov/muzammil-eds_-_Meta-Llama-3.1-8B-Instruct-English-to-French-v3-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q3_K.gguf) | Q3_K | 3.74GB |
| [Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/muzammil-eds_-_Meta-Llama-3.1-8B-Instruct-English-to-French-v3-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/muzammil-eds_-_Meta-Llama-3.1-8B-Instruct-English-to-French-v3-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Meta-Llama-3.1-8B-Instruct-English-to-French-v3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/muzammil-eds_-_Meta-Llama-3.1-8B-Instruct-English-to-French-v3-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-English-to-French-v3.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q4_0.gguf](https://huggingface.co/RichardErkhov/muzammil-eds_-_Meta-Llama-3.1-8B-Instruct-English-to-French-v3-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Meta-Llama-3.1-8B-Instruct-English-to-French-v3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/muzammil-eds_-_Meta-Llama-3.1-8B-Instruct-English-to-French-v3-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-English-to-French-v3.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/muzammil-eds_-_Meta-Llama-3.1-8B-Instruct-English-to-French-v3-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q4_K.gguf](https://huggingface.co/RichardErkhov/muzammil-eds_-_Meta-Llama-3.1-8B-Instruct-English-to-French-v3-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q4_K.gguf) | Q4_K | 4.58GB |
| [Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/muzammil-eds_-_Meta-Llama-3.1-8B-Instruct-English-to-French-v3-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q4_1.gguf](https://huggingface.co/RichardErkhov/muzammil-eds_-_Meta-Llama-3.1-8B-Instruct-English-to-French-v3-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q5_0.gguf](https://huggingface.co/RichardErkhov/muzammil-eds_-_Meta-Llama-3.1-8B-Instruct-English-to-French-v3-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/muzammil-eds_-_Meta-Llama-3.1-8B-Instruct-English-to-French-v3-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q5_K.gguf](https://huggingface.co/RichardErkhov/muzammil-eds_-_Meta-Llama-3.1-8B-Instruct-English-to-French-v3-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q5_K.gguf) | Q5_K | 5.34GB |
| [Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/muzammil-eds_-_Meta-Llama-3.1-8B-Instruct-English-to-French-v3-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q5_1.gguf](https://huggingface.co/RichardErkhov/muzammil-eds_-_Meta-Llama-3.1-8B-Instruct-English-to-French-v3-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q6_K.gguf](https://huggingface.co/RichardErkhov/muzammil-eds_-_Meta-Llama-3.1-8B-Instruct-English-to-French-v3-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q6_K.gguf) | Q6_K | 6.14GB |
| [Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q8_0.gguf](https://huggingface.co/RichardErkhov/muzammil-eds_-_Meta-Llama-3.1-8B-Instruct-English-to-French-v3-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q8_0.gguf) | Q8_0 | 7.95GB |
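For orientation, the following is a hypothetical sketch of downloading one of these quants and running it with `llama-cpp-python`; the chosen quant file and context size are assumptions, not recommendations.
```python
# Hypothetical usage sketch: fetch one quant from this repo and chat with it.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

repo = "RichardErkhov/muzammil-eds_-_Meta-Llama-3.1-8B-Instruct-English-to-French-v3-gguf"
path = hf_hub_download(
    repo_id=repo,
    filename="Meta-Llama-3.1-8B-Instruct-English-to-French-v3.Q4_K_M.gguf",  # assumed choice
)

llm = Llama(model_path=path, n_ctx=4096)  # context size is an assumption
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Translate to French: Good morning!"}]
)
print(out["choices"][0]["message"]["content"])
```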
Original model description:
---
base_model: muzammil-eds/Meta-Llama-3.1-8B-Instruct-English-to-French-v2
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** muzammil-eds
- **License:** apache-2.0
- **Finetuned from model :** muzammil-eds/Meta-Llama-3.1-8B-Instruct-English-to-French-v2
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Confused404/eng-gmq-finetuned-no | Confused404 | 2025-06-03T00:28:33Z | 0 | 0 | null | [
"pytorch",
"marian",
"translation",
"en",
"no",
"dataset:Helsinki-NLP/opus-100",
"base_model:Helsinki-NLP/opus-mt-en-gmq",
"base_model:finetune:Helsinki-NLP/opus-mt-en-gmq",
"license:apache-2.0",
"region:us"
] | translation | 2025-06-02T22:37:03Z | ---
language:
- en
- no
license: apache-2.0
tags:
- translation
- marian
- pytorch
model_type: marian
pipeline_tag: translation
datasets:
- Helsinki-NLP/opus-100
base_model:
- Helsinki-NLP/opus-mt-en-gmq
widget:
- source: "Hello, how are you?"
example_title: "EN → NO"
---
# My Fine-tuned MarianMT Model (English → Norwegian)
This model is a fine-tuned version of `Helsinki-NLP/opus-mt-en-gmq` on the `Helsinki-NLP/opus-100` (en-no) dataset.
## Usage
```python
from transformers import MarianMTModel, MarianTokenizer
model_name = "Confused404/my-finetuned-marian"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
text = "Hello, how are you?"
batch = tokenizer.prepare_seq2seq_batch([text], return_tensors="pt")
translated = model.generate(**batch)
print(tokenizer.decode(translated[0], skip_special_tokens=True)) |
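For quick experiments, the same checkpoint can also be loaded through the high-level `pipeline` API; this is a hedged sketch. If the tokenizer still expects the multi-target prefix of the base `en-gmq` model, prepend a token such as `>>nob<<` to the input.
```python
# Hedged alternative using the transformers pipeline API.
from transformers import pipeline

translator = pipeline("translation", model="Confused404/eng-gmq-finetuned-no")
print(translator("Hello, how are you?")[0]["translation_text"])
```
|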
DavidAU/L3-Dark-Planet-8B-wordstorm-r6 | DavidAU | 2025-06-03T00:28:08Z | 0 | 0 | null | [
"safetensors",
"llama",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
"science fiction",
"romance",
"all genres",
"story",
"writing",
"vivid prose",
"vivid writing",
"fiction",
"roleplaying",
"bfloat16",
"swearing",
"rp",
"llama3",
"llama-3",
"enhanced quants",
"max quants",
"maxcpu quants",
"horror",
"finetune",
"merge",
"text-generation",
"conversational",
"en",
"base_model:DavidAU/L3-Dark-Planet-8B",
"base_model:merge:DavidAU/L3-Dark-Planet-8B",
"base_model:Hastagaras/Jamet-8B-L3-MK.V-Blackroot",
"base_model:merge:Hastagaras/Jamet-8B-L3-MK.V-Blackroot",
"base_model:NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS",
"base_model:merge:NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS",
"base_model:Sao10K/L3-8B-Stheno-v3.2",
"base_model:merge:Sao10K/L3-8B-Stheno-v3.2",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:merge:meta-llama/Meta-Llama-3-8B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-06-02T11:22:01Z | ---
license: apache-2.0
language:
- en
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- fiction writing
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prose
- vivid writing
- fiction
- roleplaying
- bfloat16
- swearing
- rp
- llama3
- llama-3
- enhanced quants
- max quants
- maxcpu quants
- horror
- finetune
- merge
pipeline_tag: text-generation
base_model:
- DavidAU/L3-Dark-Planet-8B
- Sao10K/L3-8B-Stheno-v3.2
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
- Hastagaras/Jamet-8B-L3-MK.V-Blackroot
- meta-llama/Meta-Llama-3-8B-Instruct
---
<h2>L3-Dark-Planet-8B-WORDSTORM-R6</h2>
This repo contains the full precision source code, in "safe tensors" format to generate GGUFs, GPTQ, EXL2, AWQ, HQQ and other formats.
The source code can also be used directly.
Upload will be complete when the parameters show in the upper left side of this page.
This is a modified version of:
[ https://huggingface.co/DavidAU/L3-Dark-Planet-8B-GGUF ]
Please refer to that model card in the interim for usage, templates, settings and so on.
HOWEVER:
This model version's output will vary slightly to very significantly from the "source" model noted.
This model is one of ELEVEN "wordstorm" versions.
Likewise, for each "wordstorm" model in this series, output between versions will also be very different, even when using
the same model "formula", as each version uses "random pruning" to alter the final model.
Each model is then evaluated, and the "winners" are uploaded.
A "winner" means new positive change(s) have occured in model instruction following and/or output generation.
You can see some of these wordstorm versions of "Dark Planet" in this model:
[ https://huggingface.co/DavidAU/L3-MOE-8X8B-Dark-Planet-8D-Mirrored-Chaos-47B-GGUF ]
[ https://huggingface.co/DavidAU/L3-MOE-8X8B-Dark-Planet-8D-Mirrored-Chaos-47B ]
MERGEKIT Formula:
```
models:
- model: Sao10K/L3-8B-Stheno-v3.2
parameters:
weight: [1,1,.75,.5,.25,.25,.05,.01]
density: .8
- model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
parameters:
weight: [0,0,.25,.35,.4,.25,.30,.04]
density: .6
- model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
parameters:
weight: [0,0,0,.15,.35,.5,.65,.95]
density: .8
merge_method: dare_ties
base_model: meta-llama/Meta-Llama-3-8B-Instruct
dtype: bfloat16
```
NOTE:
This will NOT produce the "exact" version of this model (operation / output / attributes) because of the "density" settings.
Density introduces random pruning into the model, which can have minor to major impacts on performance, ranging from slightly negative or positive to very strongly negative or positive.
Each time you "create" this model (in mergekit) you will get a different model. This is NOT a fault or error, it is a feature of using "density".
The closer "density" is to 1, the less pruning will occur, with NO pruning occurring at a density of 1.
MERGEKIT:
https://github.com/arcee-ai/mergekit
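To regenerate a variant from the formula above, one option is mergekit's Python API. The sketch below is a hedged example: the config and output paths are assumptions, and, per the density note above, every run will produce a different model.
```python
# Hedged sketch: run the merge formula above via mergekit's Python API.
# The config/output paths are assumptions; each run prunes differently.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("dark-planet-wordstorm.yml") as f:          # the YAML formula above, saved to disk
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./L3-Dark-Planet-8B-wordstorm",         # assumed output directory
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```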
<B>IMPORTANT: Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
If you are going to use this model (source, GGUF or a different quant), please review this document for critical parameter, sampler and advanced sampler settings (for multiple AI/LLM apps).
This a "Class 1" (settings will enhance operation) model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) (especially for use case(s) beyond the model's design) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
REASON:
Regardless of "model class" this document will detail methods to enhance operations.
If the model is a Class 3/4 model, the default settings (parameters, samplers, advanced samplers) must be set correctly for the intended use case(s). Some AI/LLM apps DO NOT have consistent default setting(s), which results in sub-par model operation. Likewise, for Class 3/4 models (which operate somewhat to very differently than standard models), additional sampler and advanced sampler settings are required to "smooth out" operation, AND/OR to allow full operation for use cases the model was not designed for.
BONUS - Use these settings for ANY model, ANY repo, ANY quant (including source/full precision):
This document also details parameters, sampler and advanced samplers that can be use FOR ANY MODEL, FROM ANY REPO too - all quants, and of course source code operation too - to enhance the operation of any model.
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
NOTE:
I strongly suggest you also visit the DavidAU GGUF (below) repo for more details on using this model, especially if it is "Class 3" or "Class 4", to get maximum performance from the model.
For full information about this model, including:
- Details about this model and its use case(s).
- Context limits
- Special usage notes / settings.
- Any model(s) used to create this model.
- Template(s) used to access/use this model.
- Example generation(s)
- GGUF quants of this model
Please go to:
[[ coming soon || left side menu under "quantizations" ]] |
winnieyangwannan/refusal_Llama-3.1-8B-Instruct_mlp_positive-negative-addition-opposite_last_layer_24_2_49 | winnieyangwannan | 2025-06-03T00:27:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T00:25:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
winnieyangwannan/refusal_Llama-3.1-8B-Instruct_mlp_positive-negative-addition-opposite_last_layer_28_2_49 | winnieyangwannan | 2025-06-03T00:27:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T00:25:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
winnieyangwannan/refusal_Llama-3.1-8B-Instruct_mlp_positive-negative-addition-opposite_last_layer_0_2_49 | winnieyangwannan | 2025-06-03T00:27:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T00:25:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
winnieyangwannan/refusal_Llama-3.1-8B-Instruct_mlp_positive-negative-addition-opposite_last_layer_4_2_49 | winnieyangwannan | 2025-06-03T00:27:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-03T00:25:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
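
Until the authors fill this in, here is a minimal, hedged sketch of loading the model with 🤗 Transformers. It assumes the repository id from the metadata above and the standard `AutoModelForCausalLM` text-generation API; the authors may intend a different entry point.

```python
# Hypothetical usage sketch -- the exact entry point is not documented in this card.
# Assumes the repo id from the row metadata and the standard text-generation API.
# device_map="auto" requires the `accelerate` package.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "winnieyangwannan/refusal_Llama-3.1-8B-Instruct_mlp_positive-negative-addition-opposite_last_layer_4_2_49"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```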
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!-- fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Brokencircut3311/Julie | Brokencircut3311 | 2025-06-03T00:27:10Z | 0 | 1 | fasttext | [
"fasttext",
"code",
"text-to-speech",
"en",
"dataset:open-r1/Mixture-of-Thoughts",
"base_model:deepseek-ai/DeepSeek-R1-0528",
"base_model:finetune:deepseek-ai/DeepSeek-R1-0528",
"license:apache-2.0",
"region:us"
] | text-to-speech | 2025-06-03T00:25:07Z | ---
license: apache-2.0
datasets:
- open-r1/Mixture-of-Thoughts
language:
- en
metrics:
- accuracy
base_model:
- deepseek-ai/DeepSeek-R1-0528
new_version: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
pipeline_tag: text-to-speech
library_name: fasttext
tags:
- code
--- |