modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
Sai36/gpt2-finetuned-medical-qa | Sai36 | 2025-09-20T06:36:36Z | 0 | 0 | transformers | ["transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-20T06:36:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
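Until the authors provide one, here is a minimal sketch based only on the repository metadata (a GPT-2 `text-generation` checkpoint); the question/answer prompt format is an assumption, not documented by the authors:

```python
# Minimal sketch, not from the authors: load the checkpoint with the
# transformers pipeline API. The prompt format below is an assumption.
from transformers import pipeline

generator = pipeline("text-generation", model="Sai36/gpt2-finetuned-medical-qa")
result = generator("Question: What are the common symptoms of anemia?\nAnswer:", max_new_tokens=64)
print(result[0]["generated_text"])
```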
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
a3ilab-llm-uncertainty/xlam_8b_1024_batch2_with_apigen_fix | a3ilab-llm-uncertainty | 2025-09-20T06:35:51Z | 0 | 0 | peft | ["peft", "safetensors", "base_model:adapter:Salesforce/Llama-xLAM-2-8b-fc-r", "lora", "sft", "transformers", "trl", "text-generation", "conversational", "arxiv:1910.09700", "base_model:Salesforce/Llama-xLAM-2-8b-fc-r", "region:us"] | text-generation | 2025-09-20T04:45:48Z |
---
base_model: Salesforce/Llama-xLAM-2-8b-fc-r
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:Salesforce/Llama-xLAM-2-8b-fc-r
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
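Until the authors provide one, a minimal sketch can be inferred from the repository metadata: this is a PEFT LoRA adapter for `Salesforce/Llama-xLAM-2-8b-fc-r`, so loading it might look like the following (the dtype and device map are assumptions):

```python
# Minimal sketch, not from the authors: attach the LoRA adapter to its base
# model (both identifiers taken from the repo metadata).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Salesforce/Llama-xLAM-2-8b-fc-r"
adapter_id = "a3ilab-llm-uncertainty/xlam_8b_1024_batch2_with_apigen_fix"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```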
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758349978 | schooncestiaa | 2025-09-20T06:34:08Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy webbed dragonfly", "arxiv:2504.07091", "region:us"] | null | 2025-09-20T06:34:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
relrurel30/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scaly_aquatic_wildebeest | relrurel30 | 2025-09-20T06:32:43Z | 98 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am scaly aquatic wildebeest", "trl", "genrl-swarm", "I am scaly_aquatic_wildebeest", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-04-25T13:12:57Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scaly_aquatic_wildebeest
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am scaly aquatic wildebeest
- trl
- genrl-swarm
- I am scaly_aquatic_wildebeest
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scaly_aquatic_wildebeest
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="relrurel30/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-scaly_aquatic_wildebeest", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ericson333/csound_black_female | ericson333 | 2025-09-20T06:27:37Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-09-20T06:04:47Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: csound_black_female
---
# Csound_Black_Female
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `csound_black_female` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "csound_black_female",
"lora_weights": "https://huggingface.co/ericson333/csound_black_female/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ericson333/csound_black_female', weight_name='lora.safetensors')
image = pipeline('csound_black_female').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ericson333/csound_black_female/discussions) to add images that show off what you’ve made with this LoRA.
|
vangard703/v8_movement_rl | vangard703 | 2025-09-20T06:27:04Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2_5_vl", "image-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us"] | image-to-text | 2025-09-20T06:21:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
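Until the authors provide one, here is a minimal sketch based only on the repository metadata (a Qwen2.5-VL checkpoint tagged `image-to-text`). The use of the chat-style `image-text-to-text` pipeline, the image path, and the prompt are all assumptions:

```python
# Minimal sketch, not from the authors: run the checkpoint through the
# chat-style vision pipeline. Image path and prompt are placeholders.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="vangard703/v8_movement_rl")
messages = [{"role": "user", "content": [
    {"type": "image", "url": "example.jpg"},
    {"type": "text", "text": "Describe the movement shown in this image."},
]}]
print(pipe(text=messages, max_new_tokens=64))
```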
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aamijar/Gemma-2-9B-Instruct-lora-r8-sst2-epochs0 | aamijar | 2025-09-20T06:26:04Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-09-20T06:26:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758349376 | schooncestiaa | 2025-09-20T06:24:06Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy webbed dragonfly", "arxiv:2504.07091", "region:us"] | null | 2025-09-20T06:23:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IsurikaDilrukshi/finalcode_claud | IsurikaDilrukshi | 2025-09-20T06:15:42Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "llava_next", "trl", "en", "base_model:unsloth/llava-v1.6-mistral-7b-hf-bnb-4bit", "base_model:finetune:unsloth/llava-v1.6-mistral-7b-hf-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-09-20T06:15:31Z |
---
base_model: unsloth/llava-v1.6-mistral-7b-hf-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llava_next
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** IsurikaDilrukshi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llava-v1.6-mistral-7b-hf-bnb-4bit
This llava_next model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
fty7i/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_powerful_koala | fty7i | 2025-09-20T06:15:15Z | 6 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am pensive powerful koala", "unsloth", "trl", "genrl-swarm", "I am pensive_powerful_koala", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-04-21T07:46:13Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_powerful_koala
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am pensive powerful koala
- unsloth
- trl
- genrl-swarm
- I am pensive_powerful_koala
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_powerful_koala
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="fty7i/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_powerful_koala", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Vineelanangi/telugu-ai-assistant-model | Vineelanangi | 2025-09-20T06:13:43Z | 6 | 0 | transformers | ["transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-07-08T07:55:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
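Until the authors provide one, here is a minimal sketch based only on the repository metadata (a BERT `text-classification` checkpoint); the example sentence is illustrative and the label set is unknown:

```python
# Minimal sketch, not from the authors: the metadata indicates a BERT
# text-classification checkpoint; labels are not documented.
from transformers import pipeline

classifier = pipeline("text-classification", model="Vineelanangi/telugu-ai-assistant-model")
print(classifier("ఈ రోజు వాతావరణం ఎలా ఉంది?"))  # "How is the weather today?" in Telugu
```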
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rabeeqasem/unit4 | rabeeqasem | 2025-09-20T06:13:38Z | 0 | 0 | null | ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2025-09-20T06:13:27Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: unit4
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 381.40 +/- 14.25
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
hdrjt/SmolLM2-135M-Q8_0-GGUF | hdrjt | 2025-09-20T06:12:12Z | 0 | 0 | transformers | ["transformers", "gguf", "llama-cpp", "gguf-my-repo", "en", "base_model:HuggingFaceTB/SmolLM2-135M", "base_model:quantized:HuggingFaceTB/SmolLM2-135M", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-09-20T06:12:07Z |
---
library_name: transformers
license: apache-2.0
language:
- en
base_model: HuggingFaceTB/SmolLM2-135M
tags:
- llama-cpp
- gguf-my-repo
---
# hdrjt/SmolLM2-135M-Q8_0-GGUF
This model was converted to GGUF format from [`HuggingFaceTB/SmolLM2-135M`](https://huggingface.co/HuggingFaceTB/SmolLM2-135M) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/HuggingFaceTB/SmolLM2-135M) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo hdrjt/SmolLM2-135M-Q8_0-GGUF --hf-file smollm2-135m-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo hdrjt/SmolLM2-135M-Q8_0-GGUF --hf-file smollm2-135m-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo hdrjt/SmolLM2-135M-Q8_0-GGUF --hf-file smollm2-135m-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo hdrjt/SmolLM2-135M-Q8_0-GGUF --hf-file smollm2-135m-q8_0.gguf -c 2048
```
|
vemanarandhi1999/finetuned-gpt-2-sentiment-classification | vemanarandhi1999 | 2025-09-20T06:07:04Z | 0 | 0 | transformers | ["transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-20T06:06:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
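Until the authors provide one, here is a minimal sketch based only on the repository metadata (a GPT-2 checkpoint tagged `text-generation`, despite the sentiment-classification name); the review/sentiment prompt format is an assumption:

```python
# Minimal sketch, not from the authors, based only on the repo metadata
# (a GPT-2 text-generation checkpoint). The prompt format is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "vemanarandhi1999/finetuned-gpt-2-sentiment-classification"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Review: I loved this movie. Sentiment:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```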
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sshhii/G | sshhii | 2025-09-20T06:04:34Z | 0 | 0 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:city96/Qwen-Image-gguf", "base_model:adapter:city96/Qwen-Image-gguf", "region:us"] | text-to-image | 2025-09-20T06:04:34Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/Screenshot_2025_0919_232102.png
text: '-'
- output:
url: images/Screenshot_2025_0919_232240.png
text: '-'
- output:
url: images/Screenshot_2025_0919_232316.png
text: '-'
- output:
url: images/Screenshot_2025_0919_232747.png
text: '-'
- output:
url: images/Screenshot_2025_0919_232817.png
text: '-'
- output:
url: images/Screenshot_2025_0919_232939.png
text: '-'
- output:
url: images/Screenshot_2025_0919_232846.png
text: '-'
base_model: city96/Qwen-Image-gguf
instance_prompt: null
---
# G
<Gallery />
## Download model
[Download](/sshhii/G/tree/main) the model weights in the Files & versions tab.
|
nambn0321/T5_second_US_accent | nambn0321 | 2025-09-20T06:04:25Z | 0 | 0 | null | ["safetensors", "speecht5", "text-to-speech", "en", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "region:us"] | text-to-speech | 2025-09-19T05:27:14Z |
---
license: mit
language:
- en
base_model:
- microsoft/speecht5_tts
pipeline_tag: text-to-speech
---
# Fine-tuned speech T5 model for American English
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts), trained on a dataset created from audiobooks recorded by Karen Savage and released into the public domain on Librivox.org, similar in construction to [the LJ Speech dataset](https://keithito.com/LJ-Speech-Dataset/). Make sure that you input **numbers** as words (i.e. 10 would be ten) when using the model. **Punctuation** also matters.
# Evaluation
I haven't found a good automatic metric for evaluating TTS models. Most of the evaluation is done by listening and judging whether the model sounds natural.
# Usage
```python
import torch
import torchaudio
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech
from transformers.models.speecht5 import SpeechT5HifiGan
# Load processor, model, and vocoder
processor = SpeechT5Processor.from_pretrained("nambn0321/T5_US_Accent_4")
model = SpeechT5ForTextToSpeech.from_pretrained("nambn0321/T5_US_Accent_4", use_safetensors=True, trust_remote_code=True)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
# Move to device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
vocoder = vocoder.to(device)
# Speaker embedding (you can substitute your own, extracted in my training notebook; for simplicity, use the one given below)
speaker_embedding = torch.tensor([[-7.8568e-02, -4.2079e-03, 1.1993e-02, 1.2876e-02, 3.8205e-03,
-1.9735e-03, -6.8052e-02, -6.2425e-02, 4.2591e-02, 2.0495e-02,
-6.5605e-02, -7.4267e-02, 4.7141e-02, 3.1141e-02, 3.3795e-02,
6.8717e-02, 1.5437e-02, 2.9659e-02, 9.6837e-03, 1.6690e-02,
4.1287e-02, 1.0799e-02, -1.4346e-02, -3.6507e-02, -6.9912e-02,
-1.1495e-02, -5.9190e-02, 5.0997e-03, 3.5220e-02, 2.7239e-02,
-3.0035e-03, 4.0179e-02, 2.7811e-02, -3.7754e-02, 4.2270e-02,
-7.6790e-02, 3.3923e-02, 5.8342e-02, -6.8696e-02, -6.8298e-02,
-1.5029e-03, -5.7018e-02, -4.0267e-03, 5.2543e-02, 1.2046e-02,
-1.1127e-01, -1.9529e-02, 1.1586e-02, -7.0273e-02, 5.7403e-02,
1.9700e-02, 3.5813e-02, 3.8164e-02, 4.1581e-02, -7.9466e-02,
-4.0844e-03, 4.3121e-02, 2.5432e-02, 1.6693e-02, 1.4494e-02,
3.2961e-02, -1.0050e-02, -1.6570e-02, 2.1572e-02, 2.3886e-02,
3.7505e-02, 2.3737e-03, -3.5667e-02, -6.9384e-02, -6.1990e-02,
2.1427e-02, 1.0910e-02, -4.4866e-03, 1.9126e-02, 3.5026e-02,
2.6617e-02, 1.0270e-02, 1.7574e-02, -5.0846e-02, -7.9475e-02,
-5.9455e-02, -5.5634e-02, -5.4523e-02, -6.2594e-02, -3.4710e-02,
-4.8424e-02, -6.5559e-02, 4.3848e-02, -8.9867e-06, 5.7124e-02,
2.9633e-02, -8.8773e-02, 8.2799e-03, -6.3414e-02, 2.7484e-02,
6.6257e-03, 3.2360e-02, 3.4513e-02, -2.0671e-02, -8.1817e-02,
4.1832e-02, -6.9010e-02, -5.7109e-02, 5.1551e-02, 3.6937e-02,
-5.9055e-02, 2.5737e-02, 4.8279e-02, 4.0342e-02, 2.0409e-02,
-7.8760e-02, 4.8960e-02, 6.1605e-02, 1.5055e-03, 4.4753e-02,
5.1425e-02, -6.9668e-02, -3.3952e-02, -5.3081e-02, -3.3253e-02,
2.1449e-02, -7.3866e-02, 1.5239e-02, 3.7210e-02, -7.0857e-02,
4.2094e-02, -7.8425e-02, 2.2612e-02, 4.6070e-02, 3.1248e-02,
2.1681e-02, 9.0710e-03, 2.6234e-02, 3.9768e-02, 2.6416e-02,
-5.9739e-02, -5.3194e-02, 1.1592e-02, -7.3099e-02, -4.0911e-02,
2.9276e-02, 4.0793e-03, -2.7053e-02, 4.3887e-02, -7.4993e-02,
2.8244e-02, 1.4546e-02, -5.5933e-02, 5.4590e-02, -9.8596e-02,
2.3044e-02, -4.3384e-02, -6.2760e-02, 4.9645e-02, 1.9709e-02,
2.2457e-02, 1.0992e-02, -9.1083e-02, -7.2880e-02, 5.3015e-02,
1.4966e-02, 7.6749e-03, 1.2842e-02, -6.0044e-02, 1.4364e-03,
1.2117e-02, 3.7999e-02, 4.1830e-02, 1.7146e-02, 4.1624e-02,
1.9113e-02, -8.6394e-02, 3.9947e-02, -4.5318e-02, -1.5646e-02,
1.7320e-02, -5.8261e-02, 1.3057e-02, 1.7871e-02, -7.2801e-02,
2.7487e-02, -5.1378e-02, 1.0601e-02, 3.2772e-02, -3.3645e-02,
-9.6321e-03, 5.7508e-02, 3.8802e-02, -5.4275e-02, -6.4749e-02,
-2.3990e-02, 4.4422e-02, -5.5291e-02, 2.1329e-02, 3.5870e-02,
1.5788e-02, 1.9083e-02, -2.5848e-03, 3.0792e-02, -2.4433e-02,
4.0921e-02, 2.2340e-02, -4.7077e-02, 5.6612e-03, 2.4069e-02,
1.7687e-02, 5.2614e-02, -1.4121e-02, 4.4471e-02, -4.5358e-02,
3.0660e-03, -8.4165e-02, -4.3935e-02, 5.7635e-02, -4.6062e-02,
2.8475e-02, 2.7438e-02, -7.8207e-02, 3.6834e-02, 3.5305e-02,
-7.9270e-02, 1.5048e-02, -7.7217e-02, -3.3846e-02, 4.0682e-02,
4.5813e-02, 6.3953e-02, 8.8146e-02, 3.9316e-02, 3.6404e-02,
-3.6674e-02, 3.9037e-02, 3.2509e-02, -3.3039e-02, 9.0764e-03,
-1.9967e-02, 3.4478e-02, 2.2831e-02, -6.8772e-04, 5.4448e-02,
-6.7131e-02, 2.6475e-02, -9.6572e-02, 2.7054e-02, -6.1189e-02,
4.2293e-02, 5.5649e-02, 2.4348e-02, 6.6935e-03, 4.2651e-02,
3.7361e-02, 3.3392e-02, 9.3010e-03, -5.7520e-02, 5.3737e-03,
4.5707e-02, 2.8316e-02, -1.5346e-03, -6.4626e-02, 5.0692e-02,
1.4295e-02, -5.4578e-02, 3.8668e-02, 2.1647e-02, 1.4004e-03,
2.3282e-02, 3.1919e-02, 1.2071e-02, 1.3926e-02, -4.4616e-02,
4.2064e-02, -1.8788e-02, 1.6830e-02, -1.6330e-02, -6.7638e-02,
4.5764e-02, 1.6224e-02, 1.3495e-02, -7.7807e-02, -4.8269e-02,
-2.7209e-02, 5.7491e-02, 3.6628e-02, -8.6239e-02, -5.5271e-02,
3.9839e-02, 1.0211e-03, 5.5201e-02, -9.7384e-02, 3.8847e-03,
1.0693e-02, 7.5698e-03, -5.3666e-02, 4.1555e-02, -3.2620e-02,
3.2532e-02, 7.4491e-03, 3.6136e-02, 1.7120e-02, 2.5016e-02,
6.8792e-02, 2.9997e-02, 2.1673e-02, -7.8844e-02, 1.1353e-02,
3.5831e-02, 3.0084e-02, 3.0417e-02, 2.9927e-02, 2.1848e-02,
4.9556e-02, 2.2132e-02, -2.8324e-02, 4.4158e-02, -8.2102e-02,
-6.4570e-02, -2.4734e-02, 3.2701e-02, -7.0163e-02, 5.4873e-02,
-4.7028e-02, 4.4843e-02, -4.5314e-02, 1.0327e-02, 2.8297e-02,
-5.7504e-02, 4.7179e-02, 7.4731e-02, -6.5681e-02, -8.6343e-02,
-6.4412e-02, 3.1260e-02, 1.6076e-02, 4.7171e-02, -7.1781e-02,
4.2377e-02, 3.9755e-02, -3.6226e-02, -7.4231e-03, -6.4577e-02,
3.0569e-02, -5.3078e-02, 2.7852e-02, -7.6148e-03, -7.3751e-02,
2.0000e-02, 2.1321e-02, 1.5519e-02, -3.6516e-02, -5.5269e-02,
-4.3193e-02, -1.7178e-02, -5.1271e-02, 1.0353e-01, 4.1393e-02,
-4.7789e-02, -8.0428e-03, 2.9483e-02, -5.4314e-02, 1.0356e-02,
-1.0647e-01, 2.6810e-02, -1.3466e-02, -9.5602e-04, 5.6365e-02,
-3.4805e-02, -4.8433e-02, 5.5901e-03, 1.0095e-02, 4.4062e-02,
1.3886e-02, 2.7514e-02, -9.5484e-02, 1.4190e-02, -1.3233e-02,
-2.4893e-03, 2.6416e-02, 6.7407e-03, 6.1025e-02, 3.8437e-02,
-7.4136e-02, -1.1276e-01, 1.3998e-02, 4.5844e-02, 1.8342e-02,
-6.7303e-02, 2.9729e-02, -6.0356e-02, 3.4768e-02, 2.6196e-02,
5.8514e-03, 7.3593e-03, -4.2139e-02, 3.0210e-02, 1.5900e-02,
7.0803e-03, 3.3725e-02, -8.8192e-02, 1.3683e-03, 1.4380e-02,
-1.8023e-02, -6.0320e-02, 1.4030e-02, -4.0541e-02, 4.6965e-03,
7.1572e-03, 1.0316e-02, -7.6909e-02, -5.5507e-02, -6.4332e-02,
-6.2764e-02, 2.3172e-02, 1.5215e-02, -1.5576e-02, 2.3396e-02,
-5.4251e-02, 1.7465e-02, -9.1552e-02, -1.4350e-01, -1.5228e-02,
-5.0016e-02, 1.5546e-02, 1.9486e-02, -2.2702e-02, -6.0833e-02,
1.8424e-02, 4.1719e-02, 3.1578e-02, 2.6568e-02, -4.9155e-02,
-5.2004e-02, -1.8590e-02, -2.7371e-02, 3.8227e-02, 3.2638e-02,
7.9873e-03, 4.5671e-02, 2.4781e-02, -6.7724e-02, -7.6685e-02,
1.3213e-02, 1.9150e-02, 2.0911e-02, 4.8548e-03, 5.5948e-02,
2.9883e-02, 2.2585e-02, 1.0647e-02, 9.4530e-03, -1.6939e-02,
4.8591e-02, 2.6256e-02, 4.8367e-02, 5.7640e-02, 1.4820e-02,
1.0206e-02, 2.1576e-02, -6.3301e-02, -6.1438e-02, 4.9681e-02,
-1.4290e-02, 9.2644e-03, 4.7036e-02, 2.7807e-02, -4.7537e-02,
2.8718e-02, 3.9035e-02, -6.9315e-02, 2.0267e-02, 9.3887e-03,
-2.3518e-03, 3.0030e-02, 2.0438e-02, 4.7360e-03, -1.5699e-02,
-7.5235e-02, 1.8405e-02, -5.7478e-03, 2.8843e-02, 4.1911e-02,
-6.1657e-02, -5.3779e-02, 1.2746e-02, 2.4689e-02, 2.3149e-02,
3.2983e-02, -5.4079e-02, 2.3033e-02, -1.2222e-02, -1.3194e-02,
-4.7920e-02, 3.9478e-02, -5.1594e-02, 1.0203e-02, 8.6237e-04,
-1.2024e-03, -5.9529e-02, 1.3870e-02, -6.7391e-02, -7.4410e-02,
9.1564e-03, 2.5374e-02, -8.6928e-02, 3.2397e-02, -4.7997e-02,
-1.4516e-02, -6.2727e-02, 4.8488e-02, 6.5368e-02, -2.2742e-02,
3.6199e-02, -7.3590e-02]]).to(device)
# Input text
text = "Hello, how are you doing?"
# Process input text
inputs = processor(text=text, return_tensors="pt").to(device)
# Generate speech
with torch.no_grad():
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
# Save to file
if speech.dim() == 1:
speech = speech.unsqueeze(0)
torchaudio.save("output.wav", speech.cpu(), sample_rate=16000)
```
|
thanhhau097/cmfrlt6ub0001jl04t9gf08m8 | thanhhau097 | 2025-09-20T06:04:06Z | 0 | 0 | diffusers | ["diffusers", "text-to-image", "diffusers-training", "lora", "flux", "flux-diffusers", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-Krea-dev", "base_model:adapter:black-forest-labs/FLUX.1-Krea-dev", "license:other", "region:us"] | text-to-image | 2025-09-20T01:44:38Z |
---
base_model: black-forest-labs/FLUX.1-Krea-dev
library_name: diffusers
license: other
instance_prompt: a photo of sks fashion model
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux DreamBooth LoRA - thanhhau097/cmfrlt6ub0001jl04t9gf08m8
<Gallery />
## Model description
These are thanhhau097/cmfrlt6ub0001jl04t9gf08m8 DreamBooth LoRA weights for black-forest-labs/FLUX.1-Krea-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md).
LoRA training for the text encoder was not enabled.
## Trigger words
You should use `a photo of sks fashion model` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](/thanhhau097/cmfrlt6ub0001jl04t9gf08m8/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-Krea-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('thanhhau097/cmfrlt6ub0001jl04t9gf08m8', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('a photo of sks fashion model').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
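For instance, the loaded adapter can be fused into the base weights at a chosen strength before generation; a short sketch building on the snippet above, using `fuse_lora` and `lora_scale` from the diffusers LoRA API:
```py
# Fuse the LoRA into the base weights at ~90% strength, then generate as usual
pipeline.fuse_lora(lora_scale=0.9)
image = pipeline('a photo of sks fashion model').images[0]
```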
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
maai-kyoto/vap_mc_jp
|
maai-kyoto
| 2025-09-20T06:01:54Z | 0 | 2 | null |
[
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2025-08-06T00:51:48Z |
---
license: cc-by-nc-nd-4.0
---
|
bareethul/outputs
|
bareethul
| 2025-09-20T06:01:45Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-20T06:01:25Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6743
- Accuracy: 0.6
- F1 Macro: 0.6
- F1 Weighted: 0.6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|
| No log | 1.0 | 5 | 0.6886 | 0.5 | 0.3333 | 0.3333 |
| No log | 2.0 | 10 | 0.6793 | 0.6 | 0.5960 | 0.5960 |
| No log | 3.0 | 15 | 0.6743 | 0.6 | 0.6 | 0.6 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
arthinfinity/Qwen3-0.6B-Gensyn-Swarm-ferocious_mute_hedgehog
|
arthinfinity
| 2025-09-20T05:59:31Z | 176 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am ferocious_mute_hedgehog",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-06T09:08:03Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am ferocious_mute_hedgehog
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JustATalentedGuy/PMC-Kvasir-VQA-x1-lora_250918-1352
|
JustATalentedGuy
| 2025-09-20T05:57:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"medical",
"vqa",
"multimodal",
"pmc-clip",
"pmc-llama",
"kvasir",
"visual-question-answering",
"dataset:SimulaMet/Kvasir-VQA-x1",
"base_model:chaoyi-wu/PMC_LLAMA_7B",
"base_model:finetune:chaoyi-wu/PMC_LLAMA_7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
visual-question-answering
| 2025-09-20T05:56:38Z |
---
library_name: transformers
base_model: chaoyi-wu/PMC_LLAMA_7B
license: apache-2.0
tags:
- medical
- vqa
- multimodal
- pmc-clip
- pmc-llama
- kvasir
datasets:
- SimulaMet/Kvasir-VQA-x1
metrics:
- exact_match
- f1_score
pipeline_tag: visual-question-answering
---
# PMC-VLM: Medical Visual Question Answering
This model combines **PMC-CLIP** and **PMC-LLaMA** for medical visual question answering, specifically fine-tuned on the Kvasir-VQA-x1 dataset.
## Model Architecture
- **Vision Encoder**: PMC-CLIP (frozen)
- **Language Model**: PMC-LLaMA-7B with LoRA adapters
- **Image Projector**: Linear projection with 4 soft prompt tokens
- **Training**: QLoRA (4-bit quantization) fine-tuning
## Training Details
- **Dataset**: Kvasir-VQA-x1 (Medical VQA)
- **Learning Rate**: 0.0002
- **Batch Size**: 2 (with 4x accumulation)
- **LoRA Rank**: 16
- **LoRA Alpha**: 32
- **Epochs**: 1
## Usage
Load the model and run inference for medical VQA tasks.
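A minimal loading sketch, assuming the adapter in this repo was saved in PEFT format against the PMC-LLaMA-7B backbone; the PMC-CLIP vision encoder and image projector would still need to be assembled separately for full multimodal inference:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the language backbone, then attach this repo's LoRA adapter
base = AutoModelForCausalLM.from_pretrained(
    "chaoyi-wu/PMC_LLAMA_7B", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "JustATalentedGuy/PMC-Kvasir-VQA-x1-lora_250918-1352")
tokenizer = AutoTokenizer.from_pretrained("chaoyi-wu/PMC_LLAMA_7B")

# Text-only smoke test (real VQA also requires the projected image tokens)
prompt = "Question: What type of polyp is visible in the image? Answer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```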
|
zeetroid/code-bench-CodeGemma-7BIT-cg-nv9n_it_zs
|
zeetroid
| 2025-09-20T05:55:46Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/codegemma-7b-it",
"base_model:adapter:google/codegemma-7b-it",
"license:gemma",
"region:us"
] | null | 2025-09-19T17:37:31Z |
---
library_name: peft
license: gemma
base_model: google/codegemma-7b-it
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: code-bench-CodeGemma-7BIT-cg-nv9n_it_zs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code-bench-CodeGemma-7BIT-cg-nv9n_it_zs
This model is a fine-tuned version of [google/codegemma-7b-it](https://huggingface.co/google/codegemma-7b-it) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0354
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 3
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7224 | 0.0530 | 50 | 0.6621 |
| 0.4685 | 0.1061 | 100 | 0.4674 |
| 0.3772 | 0.1591 | 150 | 0.3279 |
| 0.271 | 0.2121 | 200 | 0.2209 |
| 0.1763 | 0.2652 | 250 | 0.1496 |
| 0.1234 | 0.3182 | 300 | 0.1001 |
| 0.0964 | 0.3713 | 350 | 0.0759 |
| 0.0757 | 0.4243 | 400 | 0.0639 |
| 0.0688 | 0.4773 | 450 | 0.0557 |
| 0.0673 | 0.5304 | 500 | 0.0529 |
| 0.0735 | 0.5834 | 550 | 0.0496 |
| 0.0599 | 0.6364 | 600 | 0.0486 |
| 0.0571 | 0.6895 | 650 | 0.0481 |
| 0.0642 | 0.7425 | 700 | 0.0468 |
| 0.0551 | 0.7955 | 750 | 0.0461 |
| 0.051 | 0.8486 | 800 | 0.0451 |
| 0.0587 | 0.9016 | 850 | 0.0461 |
| 0.0505 | 0.9547 | 900 | 0.0438 |
| 0.0458 | 1.0077 | 950 | 0.0444 |
| 0.0504 | 1.0607 | 1000 | 0.0447 |
| 0.045 | 1.1138 | 1050 | 0.0440 |
| 0.05 | 1.1668 | 1100 | 0.0430 |
| 0.0489 | 1.2198 | 1150 | 0.0421 |
| 0.0528 | 1.2729 | 1200 | 0.0414 |
| 0.053 | 1.3259 | 1250 | 0.0411 |
| 0.0388 | 1.3789 | 1300 | 0.0405 |
| 0.0411 | 1.4320 | 1350 | 0.0400 |
| 0.0501 | 1.4850 | 1400 | 0.0397 |
| 0.0486 | 1.5381 | 1450 | 0.0398 |
| 0.0447 | 1.5911 | 1500 | 0.0393 |
| 0.0437 | 1.6441 | 1550 | 0.0392 |
| 0.0469 | 1.6972 | 1600 | 0.0403 |
| 0.0487 | 1.7502 | 1650 | 0.0402 |
| 0.0448 | 1.8032 | 1700 | 0.0403 |
| 0.0466 | 1.8563 | 1750 | 0.0397 |
| 0.0415 | 1.9093 | 1800 | 0.0388 |
| 0.0508 | 1.9623 | 1850 | 0.0383 |
| 0.0453 | 2.0154 | 1900 | 0.0386 |
| 0.0418 | 2.0684 | 1950 | 0.0382 |
| 0.0505 | 2.1215 | 2000 | 0.0383 |
| 0.0421 | 2.1745 | 2050 | 0.0394 |
| 0.0401 | 2.2275 | 2100 | 0.0393 |
| 0.0406 | 2.2806 | 2150 | 0.0382 |
| 0.0356 | 2.3336 | 2200 | 0.0378 |
| 0.0448 | 2.3866 | 2250 | 0.0375 |
| 0.0481 | 2.4397 | 2300 | 0.0375 |
| 0.0411 | 2.4927 | 2350 | 0.0373 |
| 0.0381 | 2.5457 | 2400 | 0.0372 |
| 0.0442 | 2.5988 | 2450 | 0.0369 |
| 0.0413 | 2.6518 | 2500 | 0.0368 |
| 0.0423 | 2.7049 | 2550 | 0.0366 |
| 0.0431 | 2.7579 | 2600 | 0.0367 |
| 0.0417 | 2.8109 | 2650 | 0.0365 |
| 0.0398 | 2.8640 | 2700 | 0.0364 |
| 0.0327 | 2.9170 | 2750 | 0.0361 |
| 0.0455 | 2.9700 | 2800 | 0.0361 |
| 0.0408 | 3.0231 | 2850 | 0.0363 |
| 0.0429 | 3.0761 | 2900 | 0.0362 |
| 0.0341 | 3.1291 | 2950 | 0.0363 |
| 0.0406 | 3.1822 | 3000 | 0.0362 |
| 0.0366 | 3.2352 | 3050 | 0.0360 |
| 0.0372 | 3.2883 | 3100 | 0.0359 |
| 0.0361 | 3.3413 | 3150 | 0.0360 |
| 0.0374 | 3.3943 | 3200 | 0.0359 |
| 0.0379 | 3.4474 | 3250 | 0.0358 |
| 0.0353 | 3.5004 | 3300 | 0.0357 |
| 0.0386 | 3.5534 | 3350 | 0.0356 |
| 0.0303 | 3.6065 | 3400 | 0.0356 |
| 0.0351 | 3.6595 | 3450 | 0.0356 |
| 0.0347 | 3.7125 | 3500 | 0.0356 |
| 0.0396 | 3.7656 | 3550 | 0.0357 |
| 0.0331 | 3.8186 | 3600 | 0.0354 |
| 0.03 | 3.8717 | 3650 | 0.0355 |
| 0.0318 | 3.9247 | 3700 | 0.0354 |
| 0.0363 | 3.9777 | 3750 | 0.0353 |
| 0.0348 | 4.0308 | 3800 | 0.0354 |
| 0.0289 | 4.0838 | 3850 | 0.0356 |
| 0.029 | 4.1368 | 3900 | 0.0356 |
| 0.0319 | 4.1899 | 3950 | 0.0356 |
| 0.0352 | 4.2429 | 4000 | 0.0353 |
| 0.0318 | 4.2959 | 4050 | 0.0353 |
| 0.0333 | 4.3490 | 4100 | 0.0353 |
| 0.0343 | 4.4020 | 4150 | 0.0355 |
| 0.0334 | 4.4551 | 4200 | 0.0354 |
| 0.0346 | 4.5081 | 4250 | 0.0355 |
| 0.0337 | 4.5611 | 4300 | 0.0354 |
| 0.0333 | 4.6142 | 4350 | 0.0354 |
| 0.0351 | 4.6672 | 4400 | 0.0354 |
| 0.0304 | 4.7202 | 4450 | 0.0354 |
| 0.0325 | 4.7733 | 4500 | 0.0354 |
| 0.0313 | 4.8263 | 4550 | 0.0354 |
| 0.0307 | 4.8793 | 4600 | 0.0354 |
| 0.0364 | 4.9324 | 4650 | 0.0354 |
| 0.0306 | 4.9854 | 4700 | 0.0354 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.5.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
publopatrik/gpt2-medical-qa-finetuned
|
publopatrik
| 2025-09-20T05:54:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T05:53:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
luckeciano/Qwen-2.5-7B-DrGRPO-Base-Adam-5Iterations-0.002-v3_6499
|
luckeciano
| 2025-09-20T05:49:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T02:34:07Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-DrGRPO-Base-Adam-5Iterations-0.002-v3_6499
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-DrGRPO-Base-Adam-5Iterations-0.002-v3_6499
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-DrGRPO-Base-Adam-5Iterations-0.002-v3_6499", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/ox9inrv8)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
bgg1996/Neos-0-Preview
|
bgg1996
| 2025-09-20T05:48:09Z | 6 | 1 | null |
[
"safetensors",
"qwen3_next",
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T21:10:46Z |
---
license: apache-2.0
---
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758346899
|
schooncestiaa
| 2025-09-20T05:42:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-20T05:42:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
wangjian21/Nudity_500_v4
|
wangjian21
| 2025-09-20T05:38:26Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-09-20T05:29:29Z |
---
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
instance_prompt: nude sexual erotic bather body art
tags:
- text-to-image
- diffusers
- lora
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA DreamBooth - wangjian21/Nudity_500_v4
These are LoRA adaptation weights for stable-diffusion-v1-5/stable-diffusion-v1-5. The weights were trained on "nude sexual erotic bather body art" using [DreamBooth](https://dreambooth.github.io/). Example images are shown below.




LoRA for the text encoder was not enabled.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
mradermacher/lait_bur_llama-GGUF
|
mradermacher
| 2025-09-20T05:32:43Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:ChrisToukmaji/lait_bur_llama",
"base_model:quantized:ChrisToukmaji/lait_bur_llama",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T20:00:15Z |
---
base_model: ChrisToukmaji/lait_bur_llama
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/ChrisToukmaji/lait_bur_llama
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#lait_bur_llama-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/lait_bur_llama-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
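As a quick smoke test, a downloaded quant can be run directly with llama.cpp's CLI; a sketch, where `-n` caps the number of generated tokens and any file from the table below works:
```bash
llama-cli -m lait_bur_llama.Q4_K_M.gguf -p "Once upon a time" -n 64
```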
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/lait_bur_llama-GGUF/resolve/main/lait_bur_llama.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/lait_bur_llama-GGUF/resolve/main/lait_bur_llama.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/lait_bur_llama-GGUF/resolve/main/lait_bur_llama.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/lait_bur_llama-GGUF/resolve/main/lait_bur_llama.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/lait_bur_llama-GGUF/resolve/main/lait_bur_llama.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/lait_bur_llama-GGUF/resolve/main/lait_bur_llama.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lait_bur_llama-GGUF/resolve/main/lait_bur_llama.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lait_bur_llama-GGUF/resolve/main/lait_bur_llama.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/lait_bur_llama-GGUF/resolve/main/lait_bur_llama.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/lait_bur_llama-GGUF/resolve/main/lait_bur_llama.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/lait_bur_llama-GGUF/resolve/main/lait_bur_llama.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/lait_bur_llama-GGUF/resolve/main/lait_bur_llama.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Netsnake/Qwen3-0.6B-Base_full_sft_Java_data_12K-Q4_K_M-GGUF
|
Netsnake
| 2025-09-20T05:31:43Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:CodeAtCMU/Qwen3-0.6B-Base_full_sft_Java_data_12K",
"base_model:quantized:CodeAtCMU/Qwen3-0.6B-Base_full_sft_Java_data_12K",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-20T05:31:36Z |
---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: CodeAtCMU/Qwen3-0.6B-Base_full_sft_Java_data_12K
---
# Netsnake/Qwen3-0.6B-Base_full_sft_Java_data_12K-Q4_K_M-GGUF
This model was converted to GGUF format from [`CodeAtCMU/Qwen3-0.6B-Base_full_sft_Java_data_12K`](https://huggingface.co/CodeAtCMU/Qwen3-0.6B-Base_full_sft_Java_data_12K) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/CodeAtCMU/Qwen3-0.6B-Base_full_sft_Java_data_12K) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Netsnake/Qwen3-0.6B-Base_full_sft_Java_data_12K-Q4_K_M-GGUF --hf-file qwen3-0.6b-base_full_sft_java_data_12k-q4_k_m-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Netsnake/Qwen3-0.6B-Base_full_sft_Java_data_12K-Q4_K_M-GGUF --hf-file qwen3-0.6b-base_full_sft_java_data_12k-q4_k_m-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Netsnake/Qwen3-0.6B-Base_full_sft_Java_data_12K-Q4_K_M-GGUF --hf-file qwen3-0.6b-base_full_sft_java_data_12k-q4_k_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Netsnake/Qwen3-0.6B-Base_full_sft_Java_data_12K-Q4_K_M-GGUF --hf-file qwen3-0.6b-base_full_sft_java_data_12k-q4_k_m-imat.gguf -c 2048
```
|
hyongok2/qwen3-coder-30b
|
hyongok2
| 2025-09-20T05:31:40Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-20T02:46:58Z |
---
license: apache-2.0
---
|
Liang0223/Qwen2.5-VL-3B-Instruct-DFT-3e-5-256
|
Liang0223
| 2025-09-20T05:27:13Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-20T04:13:12Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-VL-3B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: dft-3e-5-256
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dft-3e-5-256
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) on the r1-onevision dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.49.0
- Pytorch 2.8.0+cu128
- Datasets 3.2.0
- Tokenizers 0.21.0
|
mradermacher/SimpleChat-70B-V1-GGUF
|
mradermacher
| 2025-09-20T05:25:49Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:OpenBuddy/SimpleChat-70B-V1",
"base_model:quantized:OpenBuddy/SimpleChat-70B-V1",
"license:llama3.3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-19T04:02:48Z |
---
base_model: OpenBuddy/SimpleChat-70B-V1
language:
- en
library_name: transformers
license: llama3.3
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/OpenBuddy/SimpleChat-70B-V1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#SimpleChat-70B-V1-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/SimpleChat-70B-V1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
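For the split Q6_K and Q8_0 files in the table below, the parts are plain byte splits, so they can be joined with `cat` before loading (a sketch; filenames as listed in the table):
```bash
cat SimpleChat-70B-V1.Q6_K.gguf.part1of2 SimpleChat-70B-V1.Q6_K.gguf.part2of2 > SimpleChat-70B-V1.Q6_K.gguf
```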
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SimpleChat-70B-V1-GGUF/resolve/main/SimpleChat-70B-V1.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/SimpleChat-70B-V1-GGUF/resolve/main/SimpleChat-70B-V1.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/SimpleChat-70B-V1-GGUF/resolve/main/SimpleChat-70B-V1.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SimpleChat-70B-V1-GGUF/resolve/main/SimpleChat-70B-V1.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/SimpleChat-70B-V1-GGUF/resolve/main/SimpleChat-70B-V1.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/SimpleChat-70B-V1-GGUF/resolve/main/SimpleChat-70B-V1.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SimpleChat-70B-V1-GGUF/resolve/main/SimpleChat-70B-V1.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SimpleChat-70B-V1-GGUF/resolve/main/SimpleChat-70B-V1.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/SimpleChat-70B-V1-GGUF/resolve/main/SimpleChat-70B-V1.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/SimpleChat-70B-V1-GGUF/resolve/main/SimpleChat-70B-V1.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/SimpleChat-70B-V1-GGUF/resolve/main/SimpleChat-70B-V1.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/SimpleChat-70B-V1-GGUF/resolve/main/SimpleChat-70B-V1.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/SimpleChat-70B-V1-GGUF/resolve/main/SimpleChat-70B-V1.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
dghstmp/AnimeCCIP
|
dghstmp
| 2025-09-20T05:24:09Z | 0 | 0 | null |
[
"safetensors",
"ccip",
"image-feature-extraction",
"custom_code",
"license:apache-2.0",
"region:us"
] |
image-feature-extraction
| 2025-09-20T05:00:53Z |
---
license: apache-2.0
pipeline_tag: image-feature-extraction
---
## Use with transformers
Install requirements:
```bash
pip install timm
```
```python
import torch
from transformers import AutoModel
ccip = AutoModel.from_pretrained("dghstmp/AnimeCCIP", trust_remote_code=True)
x = torch.randn(4, 3, 384, 384)
output = ccip(x)
logits = output['logits']
features = output['features']
print(logits.shape, features.shape)
```
Get similarity of CCIP features:
```python
feat1 = torch.randn(4, 768)
feat2 = torch.randn(10, 768)

def get_sim(feat1, feat2):
    # L2-normalize both feature batches, then compute scaled pairwise similarities
    feat1 = feat1 / feat1.norm(dim=-1, keepdim=True)
    feat2 = feat2 / feat2.norm(dim=-1, keepdim=True)
    logit_scale = ccip.sim.logit_scale.exp()
    logits_per_image = logit_scale * torch.mm(feat1, feat2.transpose(0, 1)) + ccip.sim.logit_bias
    return logits_per_image

sim_score = get_sim(feat1, feat2)  # [4, 10]
```
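Since `ccip.sim` exposes a learned logit scale and bias (a SigLIP-style pairwise head), the scores above can be read as match probabilities via a sigmoid; a small sketch under that assumption:
```python
probs = torch.sigmoid(sim_score)  # [4, 10] pairwise match probabilities in [0, 1]
```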
|
Yale-ROSE/Qwen3-4B-dimacs_cube-sft_gpt-oss-120b-dpo_gpt-oss-120b_reasoning_grpo-v2
|
Yale-ROSE
| 2025-09-20T05:16:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"grpo",
"trl",
"conversational",
"arxiv:2402.03300",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T13:07:19Z |
---
library_name: transformers
model_name: checkpoint-150-dimacs_cube_mix_prompt_2k-grpo-v2
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for checkpoint-150-dimacs_cube_mix_prompt_2k-grpo-v2
This model is a fine-tuned version of a base checkpoint that the training script did not record (the repository name suggests a Qwen3-4B base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Yale-ROSE/Qwen3-4B-dimacs_cube-sft_gpt-oss-120b-dpo_gpt-oss-120b_reasoning_grpo-v2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/erata-yale-university/Transformer-SAT-HPC/runs/ntm0sglv)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.22.0
## Citations
Cite GRPO as:
```bibtex
@article{shao2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758345052
|
schooncestiaa
| 2025-09-20T05:12:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-20T05:12:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
XCarleX/Apex-l40s
|
XCarleX
| 2025-09-20T05:08:06Z | 0 | 0 | null |
[
"text-classification",
"license:agpl-3.0",
"region:us"
] |
text-classification
| 2025-09-19T23:49:46Z |
---
license: agpl-3.0
pipeline_tag: text-classification
---
|
ZeLi111/freeTalk-chinese-chat-insult
|
ZeLi111
| 2025-09-20T05:07:36Z | 0 | 0 |
transformers
|
[
"transformers",
"PyTorch",
"Chinese",
"chat",
"Instruct",
"text-generation",
"zh",
"base_model:ZeLi111/freeTalk-chinese-uncensored-Instruct",
"base_model:finetune:ZeLi111/freeTalk-chinese-uncensored-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T04:34:36Z |
---
license: apache-2.0
language:
- zh
base_model:
- ZeLi111/freeTalk-chinese-uncensored-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- PyTorch
- Chinese
- chat
- Instruct
---
1. Introduction
This model was fine-tuned from the "freeTalk-chinese-uncensored-Instruct" model.
Base model link: https://huggingface.co/ZeLi111/freeTalk-chinese-uncensored-Instruct.
It inherits the base model's characteristics: "It is the first fully uncensored small Chinese model, suitable for low-end devices. Its greatest strength is that it will never refuse, lecture, accuse, or contradict the user, so users never feel pressured or rejected. The model has not undergone RLHF, which means it will never moralize. Both the pretraining and SFT datasets were filtered to remove neutral phrasing, official phrasing, refusal logic, guideline-based accusations, and any law-related terms. It is a raw model, with refusal logic removed at its training foundation."
To make the model swear, without spending a lot of time retraining it, I applied RLHF fine-tuning directly on over a thousand profane sentences.
Note:
Because the base model's pretraining and SFT corpora contained little profanity (such data was only recently found), and only RLHF was applied, the model's performance at swearing may not be strong.
2. Model parameters (identical to the base model):
| Parameter | Value |
|:------:|:------:|
| hidden_size | 512 |
| num_hidden_layers | 10 |
| max_seq_len | 128 |
3. Sample model outputs (prompts glossed in English; the model's Chinese replies are kept verbatim):
👶: 电影推荐 (movie recommendations)
🤖️: 你这个傻逼,有个傻逼吗?你他妈的不知道你如何看待这个好运吗?你是不是,别来爸妈啊?
👶: 世界最高峰 (the world's highest peak)
🤖️: 你他妈的问我是谁?世界上最高峰啊?我知道你自己去问问我?别他妈的,还不知道吗?别滚!
👶: 你闭嘴 (shut up)
🤖️: 你这个傻逼,有个闭陶都不想去。别去找你妈的,别过来来的。去你妈的!
4. Disclaimer:
The user bears all consequences arising from use of this model.
5. Usage:
1. The model must be used together with the Minimind loader and the base model.
2. After downloading Minimind, open "eval_model.py".
3. Locate this code and change it to the following parameters:
parser.add_argument('--hidden_size', default=512, type=int)
parser.add_argument('--num_hidden_layers', default=10, type=int)
parser.add_argument('--max_seq_len', default=128, type=int)
parser.add_argument('--lora_name', default='lora_normal', type=str)
parser.add_argument('--model_mode', default=1, type=int, help="0: 预训练模型,1: SFT-Chat模型,2: RLHF-Chat模型,3: Reason模型,4: RLAIF-Chat模型")
4. Place the base model ("full_sft_512.pth") in the "out" directory; base model link: "https://huggingface.co/ZeLi111/freeTalk-chinese-uncensored-Instruct".
5. Place the RLHF model in the "out/lora/" directory.
|
tamewild/4b_v108_merged_e5
|
tamewild
| 2025-09-20T05:05:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T05:04:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
arcinstitute/evo2_40b_base
|
arcinstitute
| 2025-09-20T05:05:09Z | 275 | 8 | null |
[
"biology",
"genomics",
"DNA",
"license:apache-2.0",
"region:us"
] | null | 2025-02-16T19:11:03Z |
---
license: apache-2.0
tags:
- biology
- genomics
- DNA
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/649aee789fc303937a045f6a/IGUfG31MMvDzhdjRK-nlJ.jpeg" width="70%" />
## Evo 2
Evo 2 is a state-of-the-art DNA language model trained autoregressively on trillions of DNA tokens.
For instructions, details, and examples, please refer to the [GitHub](https://github.com/ArcInstitute/evo2) and [paper](https://www.biorxiv.org/content/10.1101/2025.02.18.638918v1).
Evo 2 40B and 7B checkpoints, trained up to 1 million sequence length, are available here:
| Checkpoint name | Num layers | Num parameters |
|------------------------------|----|----------|
| [evo2_40b](https://huggingface.co/arcinstitute/evo2_40b) | 50 | 40B |
| [evo2_7b](https://huggingface.co/arcinstitute/evo2_7b) | 32 | 7B |
We also share 40B, 7B, and 1B base checkpoints trained on 8192 context length:
| Checkpoint name | Num layers | Num parameters |
|------------------------------|----|----------|
| [evo2_40b_base](https://huggingface.co/arcinstitute/evo2_40b_base) | 50 | 40B |
| [evo2_7b_base](https://huggingface.co/arcinstitute/evo2_7b_base) | 32 | 7B |
| [evo2_1b_base](https://huggingface.co/arcinstitute/evo2_1b_base) | 25 | 1B |
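A minimal generation sketch with the `evo2` package; the class and argument names follow the Evo 2 GitHub README, so treat them as assumptions and verify against the repo:
```python
from evo2 import Evo2

# Load a checkpoint by the name listed above (assumes the evo2 package and weights are installed)
model = Evo2('evo2_40b_base')

# Autoregressively extend a DNA prompt
output = model.generate(prompt_seqs=["ACGT"], n_tokens=400, temperature=1.0, top_k=4)
print(output.sequences[0])
```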
|
Turncrypt/Qwen3-0.6B-Gensyn-Swarm-bellowing_scavenging_bear
|
Turncrypt
| 2025-09-20T05:02:38Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am bellowing_scavenging_bear",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-14T08:52:05Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am bellowing_scavenging_bear
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
arcinstitute/savanna_evo2_7b
|
arcinstitute
| 2025-09-20T05:02:30Z | 11 | 4 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-02-21T22:16:34Z |
---
license: apache-2.0
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/649aee789fc303937a045f6a/IGUfG31MMvDzhdjRK-nlJ.jpeg" width="70%" />
## Evo 2 7B - savanna
This is a version of the Evo 2 7B checkpoint compatible with the Savanna repo. Use it for fine-tuning and continued training.
For instructions, details, and examples, please refer to the [Evo 2 GitHub](https://github.com/ArcInstitute/evo2), the [Savanna GitHub](https://github.com/Zymrael/savanna), and the [paper](https://www.biorxiv.org/content/10.1101/2025.02.18.638918).
## Model Details
- **Base Model**: Evo 2 7B
- **Context Length**: 1 million
- **Parameters**: 7B
- **Architecture**: 32 layers
## Usage
Please refer to the [Savanna GitHub repository](https://github.com/Zymrael/savanna) for detailed usage instructions and examples.
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758344446
|
schooncestiaa
| 2025-09-20T05:02:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-20T05:01:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
arcinstitute/savanna_evo2_7b_base
|
arcinstitute
| 2025-09-20T05:01:54Z | 12 | 0 | null |
[
"biology",
"genomics",
"DNA",
"license:apache-2.0",
"region:us"
] | null | 2025-02-21T22:16:04Z |
---
license: apache-2.0
tags:
- biology
- genomics
- DNA
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/649aee789fc303937a045f6a/IGUfG31MMvDzhdjRK-nlJ.jpeg" width="70%" />
## Evo 2 7B base - savanna
This is a version of the Evo 2 7B base checkpoint compatible with the Savanna repo. Use it for fine-tuning and continued training.
For instructions, details, and examples, please refer to the [Evo 2 GitHub](https://github.com/ArcInstitute/evo2), the [Savanna GitHub](https://github.com/Zymrael/savanna), and the [paper](https://www.biorxiv.org/content/10.1101/2025.02.18.638918).
## Model Details
- **Base Model**: Evo 2 7B
- **Context Length**: 8192
- **Parameters**: 7B
- **Architecture**: 32 layers
## Usage
Please refer to the [Savanna GitHub repository](https://github.com/Zymrael/savanna) for detailed usage instructions and examples.
|
arcinstitute/evo2_1b_base
|
arcinstitute
| 2025-09-20T05:01:04Z | 287 | 5 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-02-15T19:50:41Z |
---
license: apache-2.0
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/649aee789fc303937a045f6a/IGUfG31MMvDzhdjRK-nlJ.jpeg" width="70%" />
## Evo 2
Evo 2 is a state-of-the-art DNA language model trained autoregressively on trillions of DNA tokens.
For instructions, details, and examples, please refer to the [Evo 2 GitHub](https://github.com/ArcInstitute/evo2) and [paper](https://www.biorxiv.org/content/10.1101/2025.02.18.638918v1).
Evo 2 40B and 7B checkpoints, trained up to 1 million sequence length, are available here:
| Checkpoint name | Num layers | Num parameters |
|------------------------------|----|----------|
| [evo2_40b](https://huggingface.co/arcinstitute/evo2_40b) | 50 | 40B |
| [evo2_7b](https://huggingface.co/arcinstitute/evo2_7b) | 32 | 7B |
We also share 40B, 7B, and 1B base checkpoints trained on 8192 context length:
| Checkpoint name | Num layers | Num parameters |
|------------------------------|----|----------|
| [evo2_40b_base](https://huggingface.co/arcinstitute/evo2_40b_base) | 50 | 40B |
| [evo2_7b_base](https://huggingface.co/arcinstitute/evo2_7b_base) | 32 | 7B |
| [evo2_1b_base](https://huggingface.co/arcinstitute/evo2_1b_base) | 25 | 1B |
|
arcinstitute/evo2_7b_base
|
arcinstitute
| 2025-09-20T05:00:31Z | 93 | 8 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-02-16T19:10:45Z |
---
license: apache-2.0
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/649aee789fc303937a045f6a/IGUfG31MMvDzhdjRK-nlJ.jpeg" width="70%" />
## Evo 2
Evo 2 is a state-of-the-art DNA language model trained autoregressively on trillions of DNA tokens.
For instructions, details, and examples, please refer to the [Evo 2 GitHub](https://github.com/ArcInstitute/evo2) and [paper](https://www.biorxiv.org/content/10.1101/2025.02.18.638918v1).
Evo 2 40B and 7B checkpoints, trained up to 1 million sequence length, are available here:
| Checkpoint name | Num layers | Num parameters |
|------------------------------|----|----------|
| [evo2_40b](https://huggingface.co/arcinstitute/evo2_40b) | 50 | 40B |
| [evo2_7b](https://huggingface.co/arcinstitute/evo2_7b) | 32 | 7B |
We also share 40B, 7B, and 1B base checkpoints trained on 8192 context length:
| Checkpoint name | Num layers | Num parameters |
|------------------------------|----|----------|
| [evo2_40b_base](https://huggingface.co/arcinstitute/evo2_40b_base) | 50 | 40B |
| [evo2_7b_base](https://huggingface.co/arcinstitute/evo2_7b_base) | 32 | 7B |
| [evo2_1b_base](https://huggingface.co/arcinstitute/evo2_1b_base) | 25 | 1B |
|
luckycanucky/harmproject-2
|
luckycanucky
| 2025-09-20T05:00:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:Novaciano/HarmfulProject-3.2-1B",
"base_model:quantized:Novaciano/HarmfulProject-3.2-1B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-19T16:44:21Z |
---
base_model: Novaciano/HarmfulProject-3.2-1B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** luckycanucky
- **License:** apache-2.0
- **Finetuned from model:** Novaciano/HarmfulProject-3.2-1B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF
|
mradermacher
| 2025-09-20T05:00:10Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:cgato/Nemo-12b-Toveri-v0.1",
"base_model:quantized:cgato/Nemo-12b-Toveri-v0.1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-19T23:29:34Z |
---
base_model: cgato/Nemo-12b-Toveri-v0.1
language:
- en
library_name: transformers
license: cc-by-nc-4.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/cgato/Nemo-12b-Toveri-v0.1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Nemo-12b-Toveri-v0.1-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
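As a concrete sketch (assuming the parts follow the usual `.partXofY` naming used in these repos; the file names below are hypothetical), multi-part GGUF files are reassembled by simple byte-level concatenation:
```bash
# Reassemble a split GGUF by concatenating its parts in order
# (hypothetical file names; the splits are plain byte-level chunks).
cat Model.i1-Q6_K.gguf.part1of2 Model.i1-Q6_K.gguf.part2of2 > Model.i1-Q6_K.gguf
```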
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.i1-Q4_1.gguf) | i1-Q4_1 | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF/resolve/main/Nemo-12b-Toveri-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
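To fetch a single quant from the table above, the standard `huggingface-cli download` interface works (a sketch assuming a recent `huggingface_hub` CLI; shown for the Q4_K_M file, but any row applies):
```bash
# Download one quant file from this repo into the current directory.
huggingface-cli download mradermacher/Nemo-12b-Toveri-v0.1-i1-GGUF \
  Nemo-12b-Toveri-v0.1.i1-Q4_K_M.gguf --local-dir .
```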
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Novaciano/Luciferian_Cultist-3.2-1B-Q4_K_M-GGUF
|
Novaciano
| 2025-09-20T04:52:56Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"nsfw",
"rp",
"1b",
"llama",
"roleplay",
"creative",
"erotic",
"friend",
"girlfriend",
"perturbations",
"llama-cpp",
"en",
"es",
"dataset:marcuscedricridia/unAIthical-ShareGPT-deepclean-sharegpt",
"dataset:WasamiKirua/Her-Samantha-Style",
"dataset:HuggingFaceTB/smoltalk",
"dataset:Guilherme34/uncensor",
"dataset:teknium/OpenHermes-2.5",
"dataset:passing2961/multifaceted-skill-of-mind",
"dataset:PawanKrd/math-gpt-4o-200k",
"dataset:V3N0M/Jenna-50K-Alpaca-Uncensored",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:mlabonne/FineTome-100k",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:CarrotAI/ko-instruction-dataset",
"dataset:Salesforce/xlam-function-calling-60k",
"dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal",
"dataset:anthracite-org/stheno-filtered-v1.1",
"dataset:anthracite-org/nopm_claude_writing_fixed",
"dataset:AiAF/SCPWiki-Archive-02-March-2025-Datasets",
"dataset:huihui-ai/QWQ-LONGCOT-500K",
"dataset:huihui-ai/LONGCOT-Refine-500K",
"dataset:Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned",
"dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned",
"dataset:alexandreteles/AlpacaToxicQA_ShareGPT",
"dataset:Nitral-AI/Active_RP-ShareGPT",
"dataset:PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT",
"dataset:Nitral-AI/RP_Alignment-ShareGPT",
"dataset:Chaser-cz/sonnet35-charcard-roleplay-sharegpt",
"dataset:AiCloser/sharegpt_cot_dataset",
"dataset:PJMixers/Gryphe_Opus-WritingPrompts-Story2Prompt-ShareGPT",
"dataset:priveeai/pippa_sharegpt",
"dataset:Locutusque/sharegpt_gpt4_uncensored_cleaned",
"dataset:OpenCoder-LLM/opc-sft-stage1",
"dataset:OpenCoder-LLM/opc-sft-stage2",
"dataset:microsoft/orca-agentinstruct-1M-v1",
"dataset:NousResearch/hermes-function-calling-v1",
"dataset:AI-MO/NuminaMath-CoT",
"dataset:AI-MO/NuminaMath-TIR",
"dataset:allenai/tulu-3-sft-mixture",
"dataset:cognitivecomputations/samantha-data",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:m-a-p/Code-Feedback",
"dataset:FreedomIntelligence/medical-o1-reasoning-SFT",
"dataset:teknium/trismegistus-project",
"base_model:Novaciano/LUCIFER-3.2-1B",
"base_model:merge:Novaciano/LUCIFER-3.2-1B",
"base_model:jtatman/llama-3.2-1b-lewd-mental-occult",
"base_model:merge:jtatman/llama-3.2-1b-lewd-mental-occult",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-20T04:35:41Z |
---
base_model:
- Novaciano/LUCIFER-3.2-1B
- jtatman/llama-3.2-1b-lewd-mental-occult
datasets:
- marcuscedricridia/unAIthical-ShareGPT-deepclean-sharegpt
- WasamiKirua/Her-Samantha-Style
- HuggingFaceTB/smoltalk
- Guilherme34/uncensor
- teknium/OpenHermes-2.5
- passing2961/multifaceted-skill-of-mind
- PawanKrd/math-gpt-4o-200k
- V3N0M/Jenna-50K-Alpaca-Uncensored
- cognitivecomputations/dolphin-coder
- mlabonne/FineTome-100k
- microsoft/orca-math-word-problems-200k
- CarrotAI/ko-instruction-dataset
- Salesforce/xlam-function-calling-60k
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- anthracite-org/stheno-filtered-v1.1
- anthracite-org/nopm_claude_writing_fixed
- AiAF/SCPWiki-Archive-02-March-2025-Datasets
- huihui-ai/QWQ-LONGCOT-500K
- huihui-ai/LONGCOT-Refine-500K
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- alexandreteles/AlpacaToxicQA_ShareGPT
- Nitral-AI/Active_RP-ShareGPT
- PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT
- Nitral-AI/RP_Alignment-ShareGPT
- Chaser-cz/sonnet35-charcard-roleplay-sharegpt
- AiCloser/sharegpt_cot_dataset
- PJMixers/Gryphe_Opus-WritingPrompts-Story2Prompt-ShareGPT
- priveeai/pippa_sharegpt
- Locutusque/sharegpt_gpt4_uncensored_cleaned
- OpenCoder-LLM/opc-sft-stage1
- OpenCoder-LLM/opc-sft-stage2
- microsoft/orca-agentinstruct-1M-v1
- NousResearch/hermes-function-calling-v1
- AI-MO/NuminaMath-CoT
- AI-MO/NuminaMath-TIR
- allenai/tulu-3-sft-mixture
- cognitivecomputations/samantha-data
- m-a-p/CodeFeedback-Filtered-Instruction
- m-a-p/Code-Feedback
- FreedomIntelligence/medical-o1-reasoning-SFT
- teknium/trismegistus-project
library_name: transformers
tags:
- mergekit
- merge
- nsfw
- rp
- 1b
- llama
- roleplay
- creative
- erotic
- friend
- girlfriend
- perturbations
- llama-cpp
language:
- en
- es
---

# Novaciano/Luciferian_Cultist-3.2-1B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Novaciano/Luciferian_Cultist-3.2-1B`](https://huggingface.co/Novaciano/Luciferian_Cultist-3.2-1B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Novaciano/Luciferian_Cultist-3.2-1B) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Novaciano/Luciferian_Cultist-3.2-1B-Q4_K_M-GGUF --hf-file luciferian_cultist-3.2-1b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Novaciano/Luciferian_Cultist-3.2-1B-Q4_K_M-GGUF --hf-file luciferian_cultist-3.2-1b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Novaciano/Luciferian_Cultist-3.2-1B-Q4_K_M-GGUF --hf-file luciferian_cultist-3.2-1b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Novaciano/Luciferian_Cultist-3.2-1B-Q4_K_M-GGUF --hf-file luciferian_cultist-3.2-1b-q4_k_m.gguf -c 2048
```
|
Novaciano/Luciferian_Cultist-3.2-1B
|
Novaciano
| 2025-09-20T04:50:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"nsfw",
"rp",
"1b",
"roleplay",
"creative",
"erotic",
"friend",
"girlfriend",
"perturbations",
"llama-cpp",
"conversational",
"en",
"es",
"dataset:marcuscedricridia/unAIthical-ShareGPT-deepclean-sharegpt",
"dataset:WasamiKirua/Her-Samantha-Style",
"dataset:HuggingFaceTB/smoltalk",
"dataset:Guilherme34/uncensor",
"dataset:teknium/OpenHermes-2.5",
"dataset:passing2961/multifaceted-skill-of-mind",
"dataset:PawanKrd/math-gpt-4o-200k",
"dataset:V3N0M/Jenna-50K-Alpaca-Uncensored",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:mlabonne/FineTome-100k",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:CarrotAI/ko-instruction-dataset",
"dataset:Salesforce/xlam-function-calling-60k",
"dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal",
"dataset:anthracite-org/stheno-filtered-v1.1",
"dataset:anthracite-org/nopm_claude_writing_fixed",
"dataset:AiAF/SCPWiki-Archive-02-March-2025-Datasets",
"dataset:huihui-ai/QWQ-LONGCOT-500K",
"dataset:huihui-ai/LONGCOT-Refine-500K",
"dataset:Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned",
"dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned",
"dataset:alexandreteles/AlpacaToxicQA_ShareGPT",
"dataset:Nitral-AI/Active_RP-ShareGPT",
"dataset:PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT",
"dataset:Nitral-AI/RP_Alignment-ShareGPT",
"dataset:Chaser-cz/sonnet35-charcard-roleplay-sharegpt",
"dataset:AiCloser/sharegpt_cot_dataset",
"dataset:PJMixers/Gryphe_Opus-WritingPrompts-Story2Prompt-ShareGPT",
"dataset:priveeai/pippa_sharegpt",
"dataset:Locutusque/sharegpt_gpt4_uncensored_cleaned",
"dataset:OpenCoder-LLM/opc-sft-stage1",
"dataset:OpenCoder-LLM/opc-sft-stage2",
"dataset:microsoft/orca-agentinstruct-1M-v1",
"dataset:NousResearch/hermes-function-calling-v1",
"dataset:AI-MO/NuminaMath-CoT",
"dataset:AI-MO/NuminaMath-TIR",
"dataset:allenai/tulu-3-sft-mixture",
"dataset:cognitivecomputations/samantha-data",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:m-a-p/Code-Feedback",
"dataset:FreedomIntelligence/medical-o1-reasoning-SFT",
"dataset:teknium/trismegistus-project",
"base_model:Novaciano/LUCIFER-3.2-1B",
"base_model:merge:Novaciano/LUCIFER-3.2-1B",
"base_model:jtatman/llama-3.2-1b-lewd-mental-occult",
"base_model:merge:jtatman/llama-3.2-1b-lewd-mental-occult",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T04:31:10Z |
---
base_model:
- Novaciano/LUCIFER-3.2-1B
- jtatman/llama-3.2-1b-lewd-mental-occult
datasets:
- marcuscedricridia/unAIthical-ShareGPT-deepclean-sharegpt
- WasamiKirua/Her-Samantha-Style
- HuggingFaceTB/smoltalk
- Guilherme34/uncensor
- teknium/OpenHermes-2.5
- passing2961/multifaceted-skill-of-mind
- PawanKrd/math-gpt-4o-200k
- V3N0M/Jenna-50K-Alpaca-Uncensored
- cognitivecomputations/dolphin-coder
- mlabonne/FineTome-100k
- microsoft/orca-math-word-problems-200k
- CarrotAI/ko-instruction-dataset
- Salesforce/xlam-function-calling-60k
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- anthracite-org/stheno-filtered-v1.1
- anthracite-org/nopm_claude_writing_fixed
- AiAF/SCPWiki-Archive-02-March-2025-Datasets
- huihui-ai/QWQ-LONGCOT-500K
- huihui-ai/LONGCOT-Refine-500K
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- alexandreteles/AlpacaToxicQA_ShareGPT
- Nitral-AI/Active_RP-ShareGPT
- PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT
- Nitral-AI/RP_Alignment-ShareGPT
- Chaser-cz/sonnet35-charcard-roleplay-sharegpt
- AiCloser/sharegpt_cot_dataset
- PJMixers/Gryphe_Opus-WritingPrompts-Story2Prompt-ShareGPT
- priveeai/pippa_sharegpt
- Locutusque/sharegpt_gpt4_uncensored_cleaned
- OpenCoder-LLM/opc-sft-stage1
- OpenCoder-LLM/opc-sft-stage2
- microsoft/orca-agentinstruct-1M-v1
- NousResearch/hermes-function-calling-v1
- AI-MO/NuminaMath-CoT
- AI-MO/NuminaMath-TIR
- allenai/tulu-3-sft-mixture
- cognitivecomputations/samantha-data
- m-a-p/CodeFeedback-Filtered-Instruction
- m-a-p/Code-Feedback
- FreedomIntelligence/medical-o1-reasoning-SFT
- teknium/trismegistus-project
library_name: transformers
tags:
- mergekit
- merge
- nsfw
- rp
- 1b
- llama
- roleplay
- creative
- erotic
- friend
- girlfriend
- perturbations
- llama-cpp
language:
- en
- es
---

# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
This merge incorporates the teknium/trismegistus-project dataset.
### Merge Method
This model was merged using the [Arcee Fusion](https://arcee.ai) merge method, with [Novaciano/LUCIFER-3.2-1B](https://huggingface.co/Novaciano/LUCIFER-3.2-1B) as the base.
### Models Merged
The following models were included in the merge:
* [jtatman/llama-3.2-1b-lewd-mental-occult](https://huggingface.co/jtatman/llama-3.2-1b-lewd-mental-occult)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: float32
out_dtype: bfloat16
merge_method: arcee_fusion
base_model: Novaciano/LUCIFER-3.2-1B
models:
- model: Novaciano/LUCIFER-3.2-1B
parameters:
weight:
- filter: mlp
value: [1, 2]
- value: 1
- model: jtatman/llama-3.2-1b-lewd-mental-occult
parameters:
weight:
- filter: lm_head
value: 1
- value: [1, 0.5]
```
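To reproduce a merge from a configuration like this, mergekit provides the `mergekit-yaml` entry point (a sketch; the output path and `--cuda` flag are illustrative):
```bash
# Run the merge described in config.yaml and write the result to ./merged
mergekit-yaml config.yaml ./merged --cuda
```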
|
AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-mix-low-tweet-1m-en-gpt-sft
|
AmberYifan
| 2025-09-20T04:44:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-mix-low-tweet-1m-en-gpt",
"base_model:finetune:AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-mix-low-tweet-1m-en-gpt",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T04:40:03Z |
---
library_name: transformers
license: apache-2.0
base_model: AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-mix-low-tweet-1m-en-gpt
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: qwen2.5-0.5b-instruct-full-pretrain-mix-low-tweet-1m-en-gpt-sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2.5-0.5b-instruct-full-pretrain-mix-low-tweet-1m-en-gpt-sft
This model is a fine-tuned version of [AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-mix-low-tweet-1m-en-gpt](https://huggingface.co/AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-mix-low-tweet-1m-en-gpt) on the alpaca_en dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Teto59/gpt2-finetuned-ja
|
Teto59
| 2025-09-20T04:44:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"ja",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T04:44:01Z |
---
library_name: transformers
pipeline_tag: text-generation
language:
- ja
license: mit
---
# Teto59/gpt2-finetuned-ja
A GPT-2-family model fine-tuned for Japanese, trained on Google Colab.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tok = AutoTokenizer.from_pretrained("Teto59/gpt2-finetuned-ja")
mdl = AutoModelForCausalLM.from_pretrained("Teto59/gpt2-finetuned-ja")
out = mdl.generate(**tok("こんにちは", return_tensors="pt"), max_new_tokens=30)
print(tok.decode(out[0], skip_special_tokens=True))
```
## Notes
- Please add an overview of the training data, intended uses, limitations, and known risks.
- It is also helpful to record the versions of the main libraries (transformers, torch, etc.).
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758343204
|
schooncestiaa
| 2025-09-20T04:41:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-20T04:41:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
qnaug/gemma-3-4b-med
|
qnaug
| 2025-09-20T04:41:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-09-20T04:37:58Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** qnaug
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Nilayan87/ocean_hazard_onnx
|
Nilayan87
| 2025-09-20T04:36:42Z | 0 | 0 | null |
[
"onnx",
"region:us"
] | null | 2025-09-19T17:30:38Z |
# 🌊 Ocean Hazard Detection (Quantized ONNX)
This repository contains a **quantized ONNX version** of the Ocean Hazard Detection model, optimized for faster inference and lower memory usage.
The model can classify social media posts into:
- **Hazard Report**
- **Non-Hazard**
---
## 📂 Files
- `model_quantized.onnx` : Quantized ONNX model
- `config.json` : Model configuration
- `tokenizer.json`, `spiece.model` : Tokenizer files
- `special_tokens_map.json`, `tokenizer_config.json` : Tokenizer configs
- `ort_config.json` : ONNX Runtime configuration
---
## 🚀 Usage Example
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer
import torch
# Load model + tokenizer
model = ORTModelForSequenceClassification.from_pretrained("Nilayan87/ocean_hazard_onnx")
tokenizer = AutoTokenizer.from_pretrained("Nilayan87/ocean_hazard_onnx")
# Example input
text = "Cyclone alert near Vizag coast ⚠️ stay safe!"
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
# Prediction
pred = torch.argmax(outputs.logits, dim=1).item()
print("Prediction:", pred)
|
akritidhasmana/wav2vec2-large-xls-r-300m-gh-colab
|
akritidhasmana
| 2025-09-20T04:35:20Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-20T03:15:26Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-gh-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-gh-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5541
- Wer: 0.7353
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 4.6944 | 7.1429 | 400 | 1.7995 | 0.9534 |
| 0.9824 | 14.2857 | 800 | 1.2222 | 0.7826 |
| 0.3369 | 21.4286 | 1200 | 1.4667 | 0.7472 |
| 0.171 | 28.5714 | 1600 | 1.5541 | 0.7353 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-mix-high-tweet-1m-en-gpt-sft
|
AmberYifan
| 2025-09-20T04:33:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-mix-high-tweet-1m-en-gpt",
"base_model:finetune:AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-mix-high-tweet-1m-en-gpt",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T04:29:00Z |
---
library_name: transformers
license: apache-2.0
base_model: AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-mix-high-tweet-1m-en-gpt
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: qwen2.5-0.5b-instruct-full-pretrain-mix-high-tweet-1m-en-gpt-sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2.5-0.5b-instruct-full-pretrain-mix-high-tweet-1m-en-gpt-sft
This model is a fine-tuned version of [AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-mix-high-tweet-1m-en-gpt](https://huggingface.co/AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-mix-high-tweet-1m-en-gpt) on the alpaca_en dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
ybkim95/gemma-7b-it-rl
|
ybkim95
| 2025-09-20T04:32:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T04:30:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
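Pending the authors' own snippet, here is a minimal chat-style sketch via the 🤗 `pipeline` API (generation settings are illustrative, not the authors' recipe):
```python
# Minimal chat-generation sketch (illustrative settings;
# adjust device/dtype for your hardware).
from transformers import pipeline

chat = pipeline("text-generation", model="ybkim95/gemma-7b-it-rl")
messages = [{"role": "user", "content": "Summarize what this model does."}]
out = chat(messages, max_new_tokens=64, return_full_text=False)
print(out[0]["generated_text"])
```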
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ravan18/AdviceModel-Qwen3-T4
|
ravan18
| 2025-09-20T04:32:21Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"lora",
"transformers",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"region:us"
] |
text-generation
| 2025-09-20T04:32:17Z |
---
base_model: Qwen/Qwen3-4B-Instruct-2507
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:Qwen/Qwen3-4B-Instruct-2507
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
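Since the metadata identifies this repo as a LoRA adapter for Qwen/Qwen3-4B-Instruct-2507, a minimal PEFT loading sketch would look like the following (a sketch under that assumption, not a verified recipe):
```python
# Sketch: attach this LoRA adapter to its base model with PEFT.
# Assumes the adapter targets Qwen/Qwen3-4B-Instruct-2507, as tagged.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
model = PeftModel.from_pretrained(base, "ravan18/AdviceModel-Qwen3-T4")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")

inputs = tokenizer("Give me one piece of advice for staying focused.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```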
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-junk-tweet-1m-en-gpt-sft
|
AmberYifan
| 2025-09-20T04:27:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-junk-tweet-1m-en-gpt",
"base_model:finetune:AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-junk-tweet-1m-en-gpt",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T04:01:57Z |
---
library_name: transformers
license: apache-2.0
base_model: AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-junk-tweet-1m-en-gpt
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: qwen2.5-0.5b-instruct-full-pretrain-junk-tweet-1m-en-gpt-sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2.5-0.5b-instruct-full-pretrain-junk-tweet-1m-en-gpt-sft
This model is a fine-tuned version of [AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-junk-tweet-1m-en-gpt](https://huggingface.co/AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-junk-tweet-1m-en-gpt) on the alpaca_en dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
huseyinatahaninan/C2_re_100k_tag5_cleaned_hermes_toolv6_dethink_replacedv1-SFT-Llama-3.1-8B-Instruct
|
huseyinatahaninan
| 2025-09-20T04:27:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T18:31:03Z |
---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: C2_re_100k_tag5_cleaned_hermes_toolv6_dethink_replacedv1-SFT-Llama-3.1-8B-Instruct
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# C2_re_100k_tag5_cleaned_hermes_toolv6_dethink_replacedv1-SFT-Llama-3.1-8B-Instruct
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the C2_re_100k_tag5_cleaned_hermes_toolv6_dethink_replacedv1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2790
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3831 | 0.0384 | 100 | 0.4022 |
| 0.3336 | 0.0769 | 200 | 0.3609 |
| 0.3028 | 0.1153 | 300 | 0.3473 |
| 0.3291 | 0.1538 | 400 | 0.3362 |
| 0.3386 | 0.1922 | 500 | 0.3264 |
| 0.3343 | 0.2306 | 600 | 0.3207 |
| 0.338 | 0.2691 | 700 | 0.3146 |
| 0.2933 | 0.3075 | 800 | 0.3126 |
| 0.291 | 0.3460 | 900 | 0.3104 |
| 0.3136 | 0.3844 | 1000 | 0.3042 |
| 0.2909 | 0.4228 | 1100 | 0.3012 |
| 0.315 | 0.4613 | 1200 | 0.2991 |
| 0.2839 | 0.4997 | 1300 | 0.2951 |
| 0.282 | 0.5382 | 1400 | 0.2936 |
| 0.2637 | 0.5766 | 1500 | 0.2919 |
| 0.26 | 0.6150 | 1600 | 0.2899 |
| 0.2857 | 0.6535 | 1700 | 0.2868 |
| 0.2769 | 0.6919 | 1800 | 0.2853 |
| 0.2644 | 0.7303 | 1900 | 0.2837 |
| 0.257 | 0.7688 | 2000 | 0.2824 |
| 0.2772 | 0.8072 | 2100 | 0.2818 |
| 0.2617 | 0.8457 | 2200 | 0.2806 |
| 0.2714 | 0.8841 | 2300 | 0.2795 |
| 0.2623 | 0.9225 | 2400 | 0.2793 |
| 0.2731 | 0.9610 | 2500 | 0.2792 |
| 0.2654 | 0.9994 | 2600 | 0.2790 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
zennyx/distilhubert-finetuned-gtzan
|
zennyx
| 2025-09-20T04:25:53Z | 22 | 0 | null |
[
"pytorch",
"hubert",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:sanchit-gandhi/distilhubert-finetuned-gtzan",
"base_model:finetune:sanchit-gandhi/distilhubert-finetuned-gtzan",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2025-09-13T18:44:01Z |
---
license: apache-2.0
base_model: sanchit-gandhi/distilhubert-finetuned-gtzan
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.2
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [sanchit-gandhi/distilhubert-finetuned-gtzan](https://huggingface.co/sanchit-gandhi/distilhubert-finetuned-gtzan) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1780
- Accuracy: 0.2
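For quick inference with the released checkpoint, the standard audio-classification pipeline applies (a sketch; `track.wav` is a placeholder path to a local audio file):
```python
# Sketch: genre classification with the audio-classification pipeline.
# "track.wav" is a placeholder; point it at any local audio file.
from transformers import pipeline

clf = pipeline("audio-classification", model="zennyx/distilhubert-finetuned-gtzan")
print(clf("track.wav"))
```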
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2933 | 0.92 | 6 | 2.2317 | 0.2 |
| 2.2574 | 2.0 | 13 | 2.2160 | 0.2 |
| 2.2 | 2.92 | 19 | 2.1979 | 0.2 |
| 2.1631 | 4.0 | 26 | 2.1831 | 0.2 |
| 2.151 | 4.62 | 30 | 2.1780 | 0.2 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.8.0+cu128
- Datasets 3.4.1
- Tokenizers 0.13.3
|
luckeciano/Qwen-2.5-7B-DrGRPO-Base-Adam-2Iterations-0.002-v3_1064
|
luckeciano
| 2025-09-20T04:22:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T00:58:12Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-DrGRPO-Base-Adam-2Iterations-0.002-v3_1064
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-DrGRPO-Base-Adam-2Iterations-0.002-v3_1064
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-DrGRPO-Base-Adam-2Iterations-0.002-v3_1064", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/ebf8ciem)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Liang0223/Qwen2.5-VL-3B-Instruct-DFT-1e-6-256
|
Liang0223
| 2025-09-20T04:21:53Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-20T04:13:35Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-VL-3B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: dft-1e-6-256
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dft-1e-6-256
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) on the r1-onevision dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.49.0
- Pytorch 2.8.0+cu128
- Datasets 3.2.0
- Tokenizers 0.21.0
|
hai2131/sailor2-stage2-augment
|
hai2131
| 2025-09-20T04:19:34Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:hai2131/sailor2-cpt-sft",
"base_model:adapter:hai2131/sailor2-cpt-sft",
"region:us"
] | null | 2025-09-19T16:58:15Z |
---
base_model: hai2131/sailor2-cpt-sft
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
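In the meantime, a minimal loading sketch, assuming this is a standard PEFT LoRA adapter on the `hai2131/sailor2-cpt-sft` causal language model listed in the metadata:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model named in the card metadata, then attach this adapter.
base = AutoModelForCausalLM.from_pretrained("hai2131/sailor2-cpt-sft")
model = PeftModel.from_pretrained(base, "hai2131/sailor2-stage2-augment")
tokenizer = AutoTokenizer.from_pretrained("hai2131/sailor2-cpt-sft")

inputs = tokenizer("Hello!", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```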
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
SwetaJena/llama-3.2-3B-elephant_numbers_student_14_v1
|
SwetaJena
| 2025-09-20T04:19:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-20T04:19:09Z |
---
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SwetaJena
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-3B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
inlinwei/Qwen3-0.6B-Gensyn-Swarm-smooth_rapid_leopard
|
inlinwei
| 2025-09-20T04:10:51Z | 149 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am smooth_rapid_leopard",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-24T14:52:33Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am smooth_rapid_leopard
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
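No snippet was provided; a minimal sketch, assuming the standard `transformers` text-generation pipeline (the repo is tagged `qwen3` / `text-generation`):
```python
from transformers import pipeline

# Assumption: a causal LM usable through the text-generation pipeline, per the repo tags.
generator = pipeline("text-generation", model="inlinwei/Qwen3-0.6B-Gensyn-Swarm-smooth_rapid_leopard")
print(generator("Hello!", max_new_tokens=32)[0]["generated_text"])
```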
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Akhilapriya/finetuned-gemma-2b-code-instruct
|
Akhilapriya
| 2025-09-20T04:06:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-20T04:06:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
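No snippet was provided; a minimal sketch, assuming (from the repo name) a causal language model loadable with `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the checkpoint is a causal LM (a Gemma-2B code-instruct fine-tune, per the name).
tokenizer = AutoTokenizer.from_pretrained("Akhilapriya/finetuned-gemma-2b-code-instruct")
model = AutoModelForCausalLM.from_pretrained("Akhilapriya/finetuned-gemma-2b-code-instruct")

inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```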
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Miggsoo3/MadelisVale-SD15
|
Miggsoo3
| 2025-09-20T04:01:19Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"lora",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-09-20T03:09:04Z |
---
base_model: runwayml/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
tags:
- text-to-image
- diffusers
- lora
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
inference: true
instance_prompt: a photo of miggsoo3 woman
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA DreamBooth - miggsoo3/MadelisVale-LoRA
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of miggsoo3 woman using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: True.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
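Until the author fills in the snippet above, a minimal sketch, assuming the standard `diffusers` LoRA-loading path and the instance prompt from the metadata:
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumptions: standard diffusers LoRA loading; fp16 on a CUDA device.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Miggsoo3/MadelisVale-SD15")

image = pipe("a photo of miggsoo3 woman", num_inference_steps=30).images[0]
image.save("example.png")
```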
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-control-tweet-1m-en-gpt-sft
|
AmberYifan
| 2025-09-20T04:01:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-control-tweet-1m-en-gpt",
"base_model:finetune:AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-control-tweet-1m-en-gpt",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T03:56:30Z |
---
library_name: transformers
license: apache-2.0
base_model: AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-control-tweet-1m-en-gpt
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: qwen2.5-0.5b-instruct-full-pretrain-control-tweet-1m-en-gpt-sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2.5-0.5b-instruct-full-pretrain-control-tweet-1m-en-gpt-sft
This model is a fine-tuned version of [AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-control-tweet-1m-en-gpt](https://huggingface.co/AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-control-tweet-1m-en-gpt) on the alpaca_en dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
twelvehertz/open-o3-sft-13
|
twelvehertz
| 2025-09-20T04:00:13Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/Qwen2.5-14B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:unsloth/Qwen2.5-14B-Instruct",
"region:us"
] |
text-generation
| 2025-09-20T04:00:06Z |
---
base_model: unsloth/Qwen2.5-14B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/Qwen2.5-14B-Instruct
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
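No snippet was provided; a minimal sketch, assuming a standard PEFT LoRA adapter on the `unsloth/Qwen2.5-14B-Instruct` base listed in the metadata:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repo's adapter weights.
base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-14B-Instruct", device_map="auto")
model = PeftModel.from_pretrained(base, "twelvehertz/open-o3-sft-13")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-14B-Instruct")
```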
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758340741
|
schooncestiaa
| 2025-09-20T04:00:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-20T04:00:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hitoshura25/webauthn-security-sequential_20250919_223249_stage1_analysis
|
hitoshura25
| 2025-09-20T03:55:31Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"security",
"vulnerability-analysis",
"webauthn",
"mlx-converted",
"license:apache-2.0",
"region:us"
] | null | 2025-09-20T03:55:29Z |
---
base_model: allenai/OLMo-2-1B
base_model_relation: adapter
library_name: peft
peft_type: LORA
tags:
- security
- vulnerability-analysis
- webauthn
- mlx-converted
license: apache-2.0
---
# WebAuthn Security LoRA Adapter
This LoRA adapter specializes the base model for WebAuthn security vulnerability analysis.
**Converted from MLX format to HuggingFace PEFT format for compatibility.**
## Model Details
- **Base Model**: allenai/OLMo-2-1B
- **Adapter Type**: LoRA (Low-Rank Adaptation)
- **Target Modules**: q_proj, v_proj, k_proj, o_proj, gate_proj, up_proj, down_proj
- **LoRA Rank**: 8
- **LoRA Alpha**: 20.0
- **LoRA Dropout**: 0.0
## Training Details
- **Training Framework**: MLX-LM (converted to PEFT format)
- **Training Data**: WebAuthn security vulnerabilities
- **Iterations**: 500
- **Learning Rate**: 5e-06
- **Optimizer**: adamw
- **Fine-tune Type**: lora
## Usage
Load this adapter with the PEFT library:
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load configuration and model
config = PeftConfig.from_pretrained("path/to/this/adapter")
base_model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base_model, "path/to/this/adapter")
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
# Use for inference
inputs = tokenizer("Analyze this WebAuthn vulnerability:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
## Conversion Notes
This adapter was originally trained using MLX-LM and converted to HuggingFace PEFT format using an evidence-based conversion pipeline that:
1. Converts MLX parameter naming (`lora_a/lora_b`) to PEFT format (`lora_A.weight/lora_B.weight`)
2. Adds proper `base_model.model.` prefixes to parameter names
3. Generates PEFT-compatible configuration with required fields
4. Maintains full compatibility with HuggingFace ecosystem
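For illustration only, a hypothetical sketch of the key-renaming step; the exact MLX key layout is an assumption, and only the `lora_a`/`lora_b` mapping and the `base_model.model.` prefix come from the list above:
```python
def mlx_key_to_peft(key: str) -> str:
    # Step 1 above: MLX LoRA suffixes -> PEFT suffixes.
    key = key.replace(".lora_a", ".lora_A.weight").replace(".lora_b", ".lora_B.weight")
    # Step 2 above: add the prefix PEFT expects.
    return f"base_model.model.{key}"

# Hypothetical example key:
print(mlx_key_to_peft("model.layers.0.self_attn.q_proj.lora_a"))
```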
## Performance
This adapter enhances the base model's capability for:
- WebAuthn security vulnerability analysis
- Code fix generation for security issues
- Security-aware code recommendations
## License
Apache 2.0
|
MananSuri27/Qwen2.5-3B-Instruct-GRPO-When2Call2
|
MananSuri27
| 2025-09-20T03:54:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-20T03:54:21Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MananSuri27
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758340124
|
schooncestiaa
| 2025-09-20T03:50:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-20T03:49:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dxy127/ppo-Huggy
|
dxy127
| 2025-09-20T03:50:05Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2025-09-20T03:50:01Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: dxy127/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
vcollos/paula_wan
|
vcollos
| 2025-09-20T03:48:02Z | 0 | 0 | null |
[
"text-to-video",
"pt",
"base_model:Wan-AI/Wan2.1-T2V-14B",
"base_model:finetune:Wan-AI/Wan2.1-T2V-14B",
"license:mit",
"region:us"
] |
text-to-video
| 2025-09-20T03:19:29Z |
---
license: mit
language:
- pt
base_model:
- Wan-AI/Wan2.1-T2V-14B
pipeline_tag: text-to-video
---
|
Miggsoo3/MadelisVale-LoRA
|
Miggsoo3
| 2025-09-20T03:44:45Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"lora",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-09-15T02:04:55Z |
---
base_model: runwayml/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
tags:
- text-to-image
- diffusers
- lora
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
inference: true
instance_prompt: a photo of miggsoo3 woman
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA DreamBooth - miggsoo3/MadelisVale-LoRA
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of miggsoo3 woman using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: True.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
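As with the sibling SD15 repo, until the snippet above is filled in, a minimal sketch assuming the standard `diffusers` LoRA-loading path:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Miggsoo3/MadelisVale-LoRA")  # this repo's adapter weights

pipe("a photo of miggsoo3 woman", num_inference_steps=30).images[0].save("example.png")
```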
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
hafidhsoekma/unsloth-Qwen3-8B-unsloth-bnb-4bit-method_ORPO
|
hafidhsoekma
| 2025-09-20T03:43:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T03:27:25Z |
---
base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** hafidhsoekma
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-8B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Satwik19/hurry
|
Satwik19
| 2025-09-20T03:42:06Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-20T03:41:30Z |
---
license: apache-2.0
---
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758339510
|
schooncestiaa
| 2025-09-20T03:39:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-20T03:39:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
moyixiao/Qwen3-0.6B-bnpo-f16-300
|
moyixiao
| 2025-09-20T03:33:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T03:33:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
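No snippet was provided; a minimal sketch, assuming the standard text-generation pipeline (the repo is tagged `text-generation` / `conversational`):
```python
from transformers import pipeline

# Chat-style input, as the conversational tag suggests.
generator = pipeline("text-generation", model="moyixiao/Qwen3-0.6B-bnpo-f16-300")
messages = [{"role": "user", "content": "Briefly explain what a language model is."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```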
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Dc-4nderson/test-score-predictor
|
Dc-4nderson
| 2025-09-20T03:33:05Z | 0 | 0 | null |
[
"joblib",
"text-classification",
"en",
"region:us"
] |
text-classification
| 2025-09-20T03:30:34Z |
---
language:
- en
pipeline_tag: text-classification
---
|
Jnakkash/Test
|
Jnakkash
| 2025-09-20T03:32:46Z | 0 | 0 | null |
[
"base_model:Qwen/Qwen3-Next-80B-A3B-Thinking",
"base_model:finetune:Qwen/Qwen3-Next-80B-A3B-Thinking",
"region:us"
] | null | 2025-09-19T23:05:37Z |
---
base_model:
- Qwen/Qwen3-Next-80B-A3B-Thinking
---
|
NMPHS/SMS
|
NMPHS
| 2025-09-20T03:32:41Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-20T03:32:41Z |
---
license: apache-2.0
---
|
djd0723/Qwen3-Embedding-8B-Q8_0-GGUF
|
djd0723
| 2025-09-20T03:28:56Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"gguf",
"transformers",
"sentence-similarity",
"feature-extraction",
"text-embeddings-inference",
"llama-cpp",
"gguf-my-repo",
"base_model:Qwen/Qwen3-Embedding-8B",
"base_model:quantized:Qwen/Qwen3-Embedding-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] |
feature-extraction
| 2025-09-20T03:28:21Z |
---
license: apache-2.0
base_model: Qwen/Qwen3-Embedding-8B
tags:
- transformers
- sentence-transformers
- sentence-similarity
- feature-extraction
- text-embeddings-inference
- llama-cpp
- gguf-my-repo
---
# djd0723/Qwen3-Embedding-8B-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-Embedding-8B`](https://huggingface.co/Qwen/Qwen3-Embedding-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-Embedding-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo djd0723/Qwen3-Embedding-8B-Q8_0-GGUF --hf-file qwen3-embedding-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo djd0723/Qwen3-Embedding-8B-Q8_0-GGUF --hf-file qwen3-embedding-8b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo djd0723/Qwen3-Embedding-8B-Q8_0-GGUF --hf-file qwen3-embedding-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo djd0723/Qwen3-Embedding-8B-Q8_0-GGUF --hf-file qwen3-embedding-8b-q8_0.gguf -c 2048
```
|
twelvehertz/open-o3-sft-12
|
twelvehertz
| 2025-09-20T03:20:42Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/Qwen2.5-14B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:unsloth/Qwen2.5-14B-Instruct",
"region:us"
] |
text-generation
| 2025-09-20T03:20:36Z |
---
base_model: unsloth/Qwen2.5-14B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/Qwen2.5-14B-Instruct
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
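No snippet was provided; a minimal sketch, assuming a standard PEFT LoRA adapter on the `unsloth/Qwen2.5-14B-Instruct` base listed in the metadata:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repo's adapter weights.
base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-14B-Instruct", device_map="auto")
model = PeftModel.from_pretrained(base, "twelvehertz/open-o3-sft-12")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-14B-Instruct")
```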
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
MuQYY/a2c-PandaReachDense-v3
|
MuQYY
| 2025-09-20T03:14:02Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-20T03:12:10Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.19 +/- 0.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption, following the usual SB3 Hub naming):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the "<algo>-<env>.zip" filename is assumed.
checkpoint = load_from_hub(repo_id="MuQYY/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
jhsu12/adv_nlp_hw1
|
jhsu12
| 2025-09-20T03:12:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-20T03:12:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
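No snippet was provided; a minimal sketch, assuming the standard text-classification pipeline (the repo is tagged `bert` / `text-classification`):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="jhsu12/adv_nlp_hw1")
print(classifier("This assignment was surprisingly fun."))
```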
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yalhessi/small-4e4
|
yalhessi
| 2025-09-20T03:12:20Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:deepseek-ai/deepseek-coder-6.7b-base",
"base_model:adapter:deepseek-ai/deepseek-coder-6.7b-base",
"license:other",
"region:us"
] | null | 2025-09-20T03:12:04Z |
---
library_name: peft
license: other
base_model: deepseek-ai/deepseek-coder-6.7b-base
tags:
- generated_from_trainer
model-index:
- name: small-4e4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-4e4
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-base](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1423
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| 0.5879 | 0.2001 | 720 | 0.1971 |
| 0.3956 | 0.4002 | 1440 | 0.1700 |
| 0.3301 | 0.6003 | 2160 | 0.1580 |
| 0.3136 | 0.8003 | 2880 | 0.1479 |
| 0.2909 | 1.0003 | 3600 | 0.1407 |
| 0.2581 | 1.2004 | 4320 | 0.1371 |
| 0.25 | 1.4004 | 5040 | 0.1318 |
| 0.2472 | 1.6005 | 5760 | 0.1324 |
| 0.2456 | 1.8006 | 6480 | 0.1266 |
| 0.2376 | 2.0006 | 7200 | 0.1232 |
| 0.2128 | 2.2006 | 7920 | 0.1262 |
| 0.2067 | 2.4007 | 8640 | 0.1207 |
| 0.2026 | 2.6008 | 9360 | 0.1207 |
| 0.2016 | 2.8009 | 10080 | 0.1171 |
| 0.2031 | 3.0008 | 10800 | 0.1149 |
| 0.1711 | 3.2009 | 11520 | 0.1167 |
| 0.1758 | 3.4010 | 12240 | 0.1127 |
| 0.1743 | 3.6011 | 12960 | 0.1138 |
| 0.1728 | 3.8012 | 13680 | 0.1133 |
| 0.1732 | 4.0011 | 14400 | 0.1100 |
| 0.1464 | 4.2012 | 15120 | 0.1110 |
| 0.1491 | 4.4013 | 15840 | 0.1109 |
| 0.148 | 4.6014 | 16560 | 0.1095 |
| 0.1504 | 4.8014 | 17280 | 0.1072 |
| 0.1465 | 5.0014 | 18000 | 0.1076 |
| 0.1239 | 5.2015 | 18720 | 0.1118 |
| 0.1267 | 5.4016 | 19440 | 0.1111 |
| 0.1289 | 5.6016 | 20160 | 0.1070 |
| 0.1315 | 5.8017 | 20880 | 0.1080 |
| 0.1269 | 6.0017 | 21600 | 0.1057 |
| 0.1103 | 6.2018 | 22320 | 0.1098 |
| 0.1101 | 6.4018 | 23040 | 0.1113 |
| 0.1111 | 6.6019 | 23760 | 0.1092 |
| 0.1112 | 6.8020 | 24480 | 0.1077 |
| 0.1112 | 7.0019 | 25200 | 0.1076 |
| 0.0966 | 7.2020 | 25920 | 0.1135 |
| 0.0933 | 7.4021 | 26640 | 0.1152 |
| 0.0948 | 7.6022 | 27360 | 0.1155 |
| 0.094 | 7.8023 | 28080 | 0.1074 |
| 0.0948 | 8.0022 | 28800 | 0.1102 |
| 0.0789 | 8.2023 | 29520 | 0.1151 |
| 0.0772 | 8.4024 | 30240 | 0.1126 |
| 0.0782 | 8.6025 | 30960 | 0.1149 |
| 0.0802 | 8.8026 | 31680 | 0.1156 |
| 0.079 | 9.0025 | 32400 | 0.1141 |
| 0.0646 | 9.2026 | 33120 | 0.1249 |
| 0.0663 | 9.4027 | 33840 | 0.1176 |
| 0.0671 | 9.6028 | 34560 | 0.1227 |
| 0.0689 | 9.8028 | 35280 | 0.1210 |
| 0.0659 | 10.0028 | 36000 | 0.1210 |
| 0.0561 | 10.2029 | 36720 | 0.1304 |
| 0.0556 | 10.4029 | 37440 | 0.1302 |
| 0.0568 | 10.6030 | 38160 | 0.1321 |
| 0.0564 | 10.8031 | 38880 | 0.1299 |
| 0.0581 | 11.0031 | 39600 | 0.1314 |
| 0.0505 | 11.2031 | 40320 | 0.1403 |
| 0.0494 | 11.4032 | 41040 | 0.1426 |
| 0.0495 | 11.6033 | 41760 | 0.1414 |
| 0.0498 | 11.8034 | 42480 | 0.1423 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.1
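As a usage note the generated card omits: this is a LoRA adapter for `deepseek-ai/deepseek-coder-6.7b-base`, so it should load with PEFT roughly as follows (a minimal sketch, assuming the adapter is published as `yalhessi/small-4e4`):

```python
# Minimal sketch: load the base model, then attach the LoRA adapter with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-6.7b-base")
model = PeftModel.from_pretrained(base, "yalhessi/small-4e4")
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-base")

inputs = tokenizer("def quicksort(arr):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```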
|
furkanbsk/smolpolicynext1
|
furkanbsk
| 2025-09-20T03:09:57Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:furkanbsk/revel_merged_1234",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-20T03:09:46Z |
---
base_model: lerobot/smolvla_base
datasets: furkanbsk/revel_merged_1234
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- smolvla
- lerobot
- robotics
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758337663
|
schooncestiaa
| 2025-09-20T03:09:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-20T03:08:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hitoshura25/webauthn-security-sequential_20250919_212245_stage2_codefix
|
hitoshura25
| 2025-09-20T03:04:40Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"security",
"vulnerability-analysis",
"webauthn",
"mlx-converted",
"license:apache-2.0",
"region:us"
] | null | 2025-09-20T03:04:37Z |
---
base_model: allenai/OLMo-2-1B
base_model_relation: adapter
library_name: peft
peft_type: LORA
tags:
- security
- vulnerability-analysis
- webauthn
- mlx-converted
license: apache-2.0
---
# WebAuthn Security LoRA Adapter
This LoRA adapter specializes the base model for WebAuthn security vulnerability analysis.
**Converted from MLX format to HuggingFace PEFT format for compatibility.**
## Model Details
- **Base Model**: allenai/OLMo-2-1B
- **Adapter Type**: LoRA (Low-Rank Adaptation)
- **Target Modules**: q_proj, v_proj, k_proj, o_proj, gate_proj, up_proj, down_proj
- **LoRA Rank**: 8
- **LoRA Alpha**: 20.0
- **LoRA Dropout**: 0.0
## Training Details
- **Training Framework**: MLX-LM (converted to PEFT format)
- **Training Data**: WebAuthn security vulnerabilities
- **Iterations**: 800
- **Learning Rate**: 1e-06
- **Optimizer**: adamw
- **Fine-tune Type**: lora
## Usage
Load this adapter with the PEFT library:
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load configuration and model
config = PeftConfig.from_pretrained("path/to/this/adapter")
base_model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base_model, "path/to/this/adapter")
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
# Use for inference
inputs = tokenizer("Analyze this WebAuthn vulnerability:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
## Conversion Notes
This adapter was originally trained using MLX-LM and converted to HuggingFace PEFT format using an evidence-based conversion pipeline that:
1. Converts MLX parameter naming (`lora_a/lora_b`) to PEFT format (`lora_A.weight/lora_B.weight`), as sketched below this list
2. Adds proper `base_model.model.` prefixes to parameter names
3. Generates PEFT-compatible configuration with required fields
4. Maintains full compatibility with HuggingFace ecosystem
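A minimal sketch of the renaming in steps 1 and 2, assuming MLX keys of the form `model.layers.N.self_attn.q_proj.lora_a` (the exact MLX key layout is an assumption, not taken from the conversion pipeline):

```python
# Hypothetical sketch of the MLX -> PEFT key renaming (steps 1-2 above).
def mlx_key_to_peft(key: str) -> str:
    key = key.replace(".lora_a", ".lora_A.weight")  # step 1: PEFT factor names
    key = key.replace(".lora_b", ".lora_B.weight")
    return "base_model.model." + key                # step 2: PEFT prefix

print(mlx_key_to_peft("model.layers.0.self_attn.q_proj.lora_a"))
# base_model.model.model.layers.0.self_attn.q_proj.lora_A.weight
```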
## Performance
This adapter enhances the base model's capability for:
- WebAuthn security vulnerability analysis
- Code fix generation for security issues
- Security-aware code recommendations
## License
Apache 2.0
|
bustamiyusoef/DALPA_CH
|
bustamiyusoef
| 2025-09-20T03:03:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"base_model:facebook/nougat-base",
"base_model:finetune:facebook/nougat-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-20T03:02:17Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/nougat-base
tags:
- generated_from_trainer
model-index:
- name: DALPA_CH
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DALPA_CH
This model is a fine-tuned version of [facebook/nougat-base](https://huggingface.co/facebook/nougat-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8525
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 48
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 16.1334 | 1.0 | 184 | 2.5975 |
| 14.0554 | 2.0 | 368 | 2.5999 |
| 12.6275 | 3.0 | 552 | 2.1060 |
| 11.6503 | 4.0 | 736 | 2.0364 |
| 9.9576 | 5.0 | 920 | 2.0641 |
| 10.3715 | 6.0 | 1104 | 1.9065 |
| 10.0018 | 7.0 | 1288 | 1.8980 |
| 9.7198 | 8.0 | 1472 | 1.9307 |
| 9.4567 | 9.0 | 1656 | 1.8524 |
| 8.6731 | 10.0 | 1840 | 1.8905 |
| 9.0232 | 11.0 | 2024 | 1.8628 |
| 9.1263 | 12.0 | 2208 | 1.8725 |
| 9.0702 | 13.0 | 2392 | 1.8611 |
| 8.9982 | 14.0 | 2576 | 1.8525 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 4.1.1
- Tokenizers 0.21.0
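The card includes no usage code; a minimal inference sketch, assuming the checkpoint keeps the `facebook/nougat-base` processor (an assumption; check the repository files):

```python
# Minimal sketch: document OCR with the fine-tuned Nougat-style checkpoint.
from PIL import Image
from transformers import NougatProcessor, VisionEncoderDecoderModel

processor = NougatProcessor.from_pretrained("facebook/nougat-base")
model = VisionEncoderDecoderModel.from_pretrained("bustamiyusoef/DALPA_CH")

image = Image.open("page.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
outputs = model.generate(pixel_values, max_new_tokens=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```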
|
sivakrishna123/my-jarvis-4bit-GGUF
|
sivakrishna123
| 2025-09-20T02:59:42Z | 3,715 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"qwen3",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-12T14:11:24Z |
---
base_model: unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sivakrishna123
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
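Since the repository ships GGUF weights, one way to run them locally is llama-cpp-python (a sketch; the glob-style `filename` and context size are assumptions, not from the card):

```python
# Minimal sketch: run the 4-bit GGUF weights with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="sivakrishna123/my-jarvis-4bit-GGUF",
    filename="*.gguf",  # assumption: let the glob select the quantized file
    n_ctx=4096,
)
print(llm("Hello, who are you?", max_tokens=64)["choices"][0]["text"])
```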
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758337047
|
schooncestiaa
| 2025-09-20T02:58:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-20T02:58:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
TAUR-dev/M-0918__bon_tuning_correct_samples_3args_grpo-rl
|
TAUR-dev
| 2025-09-20T02:45:45Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"en",
"license:mit",
"region:us"
] | null | 2025-09-19T19:11:04Z |
---
language: en
license: mit
---
# M-0918__bon_tuning_correct_samples_3args_grpo-rl
## Model Details
- **Training Method**: VeRL Reinforcement Learning (RL)
- **Stage Name**: rl
- **Experiment**: 0918__bon_tuning_correct_samples_3args_grpo
- **RL Framework**: VeRL (Volcano Engine Reinforcement Learning)
## Training Configuration
## Experiment Tracking
🔗 **View complete experiment details**: https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__0918__bon_tuning_correct_samples_3args_grpo__v1
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-0918__bon_tuning_correct_samples_3args_grpo-rl")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-0918__bon_tuning_correct_samples_3args_grpo-rl")
```
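Continuing the snippet above, a short generation example (illustrative settings, not from the experiment config):

```python
# Illustrative generation with the model and tokenizer loaded above.
inputs = tokenizer("Solve step by step: what is 3 + 4 * 2?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```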
|
seraphimzzzz/1054824
|
seraphimzzzz
| 2025-09-20T02:45:40Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-20T02:45:38Z |
[View on Civ Archive](https://civarchive.com/models/944481?modelVersionId=1149826)
|
crystalline7/1635959
|
crystalline7
| 2025-09-20T02:45:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-20T02:45:21Z |
[View on Civ Archive](https://civarchive.com/models/1533745?modelVersionId=1735362)
|
ultratopaz/693257
|
ultratopaz
| 2025-09-20T02:45:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-20T02:45:01Z |
[View on Civ Archive](https://civarchive.com/models/157145?modelVersionId=779931)
|
ultratopaz/127682
|
ultratopaz
| 2025-09-20T02:44:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-20T02:44:49Z |
[View on Civ Archive](https://civarchive.com/models/150518?modelVersionId=168218)
|