modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
Arsen008/wav2vec2-large-mms-1b-turkish-colab | Arsen008 | 2025-06-16T08:03:27Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-06-10T09:42:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
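The repo tags mark this as a wav2vec2 automatic-speech-recognition checkpoint, so until the authors add their own snippet, a minimal sketch along these lines should apply (an assumption inferred from the tags; `sample.wav` is a placeholder file name):

```python
from transformers import pipeline

# Hypothetical quick-start, inferred from the repo tags (wav2vec2, automatic-speech-recognition)
asr = pipeline("automatic-speech-recognition", model="Arsen008/wav2vec2-large-mms-1b-turkish-colab")
print(asr("sample.wav")["text"])
```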
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
srivihari/resume-job-role-classifier | srivihari | 2025-06-16T08:03:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"resume",
"job-role-classification",
"en",
"dataset:custom",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-15T17:33:43Z | ---
license: apache-2.0
tags:
- text-classification
- resume
- job-role-classification
- transformers
- distilbert
datasets:
- custom
language:
- en
pipeline_tag: text-classification
---
# Resume Job Role Classifier
A fine-tuned [`DistilBERT`](https://huggingface.co/distilbert-base-uncased) model that classifies job roles from resume content, predicting the most likely profession for a given resume text across 14 job categories.
---
## Model Details
- **Architecture**: DistilBERT (base, uncased)
- **Task**: Multi-class text classification
- **Input**: Raw resume text (English)
- **Output**: Predicted job category label and score
---
## Labels Covered
This model supports classification into the following job categories:
- Data Science
- Java Developer
- Web Designing
- HR
- Mechanical Engineer
- Electrical Engineering
- Civil Engineer
- Arts
- Advocate
- Sales
- Health and fitness
- Business Analyst
- SAP Developer
- Automation Testing
---
## Training
- **Dataset**: Custom dataset containing labeled resumes
- **Split**: 80% train / 20% test
- **Metrics**:
- Accuracy: 99–100%
- F1 Score: ~0.99–1.00 (macro avg)
- **Epochs**: 3
- **Batch size**: 8
- **Optimizer**: AdamW
---
## How to Use
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="srivihari/resume-job-role-classifier")
result = classifier("Experienced data scientist with Python, machine learning, and statistics background.")
print(result)
# Example output:
# [{'label': 'Data Science', 'score': 0.97}]
```

Note: if you run into `token_type_ids` errors, adjust the tokenizer config as below:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model = AutoModelForSequenceClassification.from_pretrained("srivihari/resume-job-role-classifier")
tokenizer = AutoTokenizer.from_pretrained("srivihari/resume-job-role-classifier")
# DistilBERT does not use token_type_ids, so drop them from the tokenizer inputs
tokenizer.model_input_names = ["input_ids", "attention_mask"]
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
```
|
aieng-lab/codebert-base_review-aspect | aieng-lab | 2025-06-16T08:00:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"en",
"base_model:microsoft/codebert-base",
"base_model:finetune:microsoft/codebert-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T08:00:20Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- microsoft/codebert-base
pipeline_tag: text-classification
---
# CodeBERT base for classifying API reviews
This model classifies API reviews in developer forums (e.g., Stack Overflow) as 'usability', 'others', 'onlysentiment', 'bug', 'performance', 'community', 'documentation', 'compatibility', 'legal', 'portability' or 'security'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
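A minimal usage sketch with the 🤗 transformers pipeline (an assumption based on the text-classification tag, not an author-provided snippet):

```python
from transformers import pipeline

# Labels follow the aspect categories listed above ('usability', 'documentation', ...)
classifier = pipeline("text-classification", model="aieng-lab/codebert-base_review-aspect")
print(classifier("The documentation for this API is outdated and hard to follow."))
# e.g. [{'label': 'documentation', 'score': ...}]
```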
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
sommerzen/EuroMoE-German-experiment | sommerzen | 2025-06-16T07:59:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mixtral",
"trl",
"en",
"base_model:utter-project/EuroMoE-2.6B-A0.6B-Instruct-Preview",
"base_model:finetune:utter-project/EuroMoE-2.6B-A0.6B-Instruct-Preview",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T07:59:03Z | ---
base_model: utter-project/EuroMoE-2.6B-A0.6B-Instruct-Preview
tags:
- text-generation-inference
- transformers
- unsloth
- mixtral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sommerzen
- **License:** apache-2.0
- **Finetuned from model:** utter-project/EuroMoE-2.6B-A0.6B-Instruct-Preview
This mixtral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
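Since the checkpoint is a transformers-compatible Mixtral-style model, generation should follow the usual pattern (a sketch under that assumption, not an author-provided snippet):

```python
from transformers import pipeline

# Hypothetical usage sketch for this German-focused fine-tune
generator = pipeline("text-generation", model="sommerzen/EuroMoE-German-experiment", device_map="auto")
out = generator([{"role": "user", "content": "Erkläre kurz, was ein Mixture-of-Experts-Modell ist."}], max_new_tokens=128, return_full_text=False)
print(out[0]["generated_text"])
```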
|
SidXXD/eps8-gaussian-f_26 | SidXXD | 2025-06-16T07:58:38Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2025-06-16T07:53:43Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: photo of a sks person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/eps8-gaussian-f_26
These are Custom Diffusion adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on photo of a sks person using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images in the following.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
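A minimal loading sketch with diffusers (the weight file name below is the default written by the diffusers Custom Diffusion example script and is an assumption, not confirmed for this repo):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Attach the Custom Diffusion attention processors from this repo
pipe.unet.load_attn_procs("SidXXD/eps8-gaussian-f_26", weight_name="pytorch_custom_diffusion_weights.bin")
image = pipe("photo of a sks person", num_inference_steps=50).images[0]
image.save("sks_person.png")
```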
|
phospho-app/yeva11-ACT_BBOX-lerobot_kirby_01-51ib3 | phospho-app | 2025-06-16T07:58:29Z | 0 | 0 | null | [
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-06-16T07:36:35Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful; try it out on your robot!
## Training parameters:
- **Dataset**: [phospho-app/lerobot_kirby_01_bboxes](https://huggingface.co/datasets/phospho-app/lerobot_kirby_01_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
**Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
**Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
IkePy/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-wary_tropical_dove | IkePy | 2025-06-16T07:58:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am wary tropical dove",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-15T14:52:52Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-wary_tropical_dove
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am wary tropical dove
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-wary_tropical_dove
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="IkePy/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-wary_tropical_dove", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
aieng-lab/t5-3b_review-aspect | aieng-lab | 2025-06-16T07:58:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text-classification",
"en",
"base_model:google-t5/t5-3b",
"base_model:finetune:google-t5/t5-3b",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T07:56:21Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- t5-3b
pipeline_tag: text-classification
---
# T5 3b for classifying API reviews
This model classifies API reviews in developer forums (e.g., Stack Overflow) as 'usability', 'others', 'onlysentiment', 'bug', 'performance', 'community', 'documentation', 'compatibility', 'legal', 'portability' or 'security'.
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [t5-3b](https://huggingface.co/t5-3b)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
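A minimal usage sketch (an assumption based on the text-classification pipeline tag; it relies on transformers' T5 sequence-classification support):

```python
from transformers import pipeline

# Hypothetical quick-start; labels follow the aspect categories listed above
classifier = pipeline("text-classification", model="aieng-lab/t5-3b_review-aspect")
print(classifier("Calling this endpoint crashes when the payload exceeds 1 MB."))
# e.g. [{'label': 'bug', 'score': ...}]
```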
## Citation
```
@misc{pena2025benchmark,
author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
Nerva1228/liupopo | Nerva1228 | 2025-06-16T07:56:49Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-16T07:56:48Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: liupopo
---
# Liupopo
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `liupopo` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "liupopo",
"lora_weights": "https://huggingface.co/Nerva1228/liupopo/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/liupopo', weight_name='lora.safetensors')
image = pipeline('liupopo').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 5e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Nerva1228/liupopo/discussions) to add images that show off what you've made with this LoRA.
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.15_0.15_epoch2 | MinaMila | 2025-06-16T07:56:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:54:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
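The tags mark this as a gemma2 text-generation checkpoint, so a chat-style sketch along these lines should work (an assumption inferred from the tags, not an author-provided snippet):

```python
from transformers import pipeline

# Hypothetical quick-start, inferred from the gemma2 / text-generation / conversational tags
generator = pipeline("text-generation", model="MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.15_0.15_epoch2", device_map="auto")
out = generator([{"role": "user", "content": "Hello!"}], max_new_tokens=64, return_full_text=False)
print(out[0]["generated_text"])
```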
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
soy3on/Qwen3_Rude_RAG | soy3on | 2025-06-16T07:55:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T07:55:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JayHyeon/Qwen_1.5B-math-VIPO_5e-7_3.0vpo_constant-1ep | JayHyeon | 2025-06-16T07:54:51Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:argilla/distilabel-math-preference-dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-Math-1.5B",
"base_model:finetune:Qwen/Qwen2.5-Math-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T05:29:43Z | ---
base_model: Qwen/Qwen2.5-Math-1.5B
datasets: argilla/distilabel-math-preference-dpo
library_name: transformers
model_name: Qwen_1.5B-math-VIPO_5e-7_3.0vpo_constant-1ep
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen_1.5B-math-VIPO_5e-7_3.0vpo_constant-1ep
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) on the [argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/Qwen_1.5B-math-VIPO_5e-7_3.0vpo_constant-1ep", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/0fnh12lt)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
aitechleart/Andrij | aitechleart | 2025-06-16T07:54:13Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-06-16T07:11:55Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
SidXXD/eps8-noise_upscaling-f_26 | SidXXD | 2025-06-16T07:53:35Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2025-06-16T07:47:56Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: photo of a sks person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/eps8-noise_upscaling-f_26
These are Custom Diffusion adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on photo of a sks person using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images in the following.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
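A minimal loading sketch with diffusers (the weight file name is the Custom Diffusion example script's default and is an assumption, not confirmed for this repo):

```python
import torch
from diffusers import DiffusionPipeline

# Attach the Custom Diffusion attention processors from this repo
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipe.unet.load_attn_procs("SidXXD/eps8-noise_upscaling-f_26", weight_name="pytorch_custom_diffusion_weights.bin")
image = pipe("photo of a sks person", num_inference_steps=50).images[0]
```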
|
tscstudios/efphy0itvsbuqapuj0ui31j6gff3_24705bf4-0112-49d1-ab57-4a053ed9a09b | tscstudios | 2025-06-16T07:53:11Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-16T07:53:09Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Efphy0Itvsbuqapuj0Ui31J6Gff3_24705Bf4 0112 49D1 Ab57 4A053Ed9A09B
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/tscstudios/efphy0itvsbuqapuj0ui31j6gff3_24705bf4-0112-49d1-ab57-4a053ed9a09b/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('tscstudios/efphy0itvsbuqapuj0ui31j6gff3_24705bf4-0112-49d1-ab57-4a053ed9a09b', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/tscstudios/efphy0itvsbuqapuj0ui31j6gff3_24705bf4-0112-49d1-ab57-4a053ed9a09b/discussions) to add images that show off what you've made with this LoRA.
|
csukuangfj/vits-piper-de_DE-glados_turret-medium | csukuangfj | 2025-06-16T07:52:50Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2025-06-16T07:08:40Z | # Introduction
The models are from
https://huggingface.co/systemofapwne/piper-de-glados/tree/main/de/de_DE |
Triangle104/Qwen3-4B-Esper3-Q4_K_S-GGUF | Triangle104 | 2025-06-16T07:52:45Z | 18 | 0 | transformers | [
"transformers",
"gguf",
"esper",
"esper-3",
"valiant",
"valiant-labs",
"qwen",
"qwen-3",
"qwen-3-4b",
"4b",
"reasoning",
"code",
"code-instruct",
"python",
"javascript",
"dev-ops",
"jenkins",
"terraform",
"scripting",
"powershell",
"azure",
"aws",
"gcp",
"cloud",
"problem-solving",
"architect",
"engineer",
"developer",
"creative",
"analytical",
"expert",
"rationality",
"conversational",
"chat",
"instruct",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:sequelbox/Titanium2.1-DeepSeek-R1",
"dataset:sequelbox/Tachibana2-DeepSeek-R1",
"dataset:sequelbox/Raiden-DeepSeek-R1",
"base_model:ValiantLabs/Qwen3-4B-Esper3",
"base_model:quantized:ValiantLabs/Qwen3-4B-Esper3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T23:24:05Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- esper
- esper-3
- valiant
- valiant-labs
- qwen
- qwen-3
- qwen-3-4b
- 4b
- reasoning
- code
- code-instruct
- python
- javascript
- dev-ops
- jenkins
- terraform
- scripting
- powershell
- azure
- aws
- gcp
- cloud
- problem-solving
- architect
- engineer
- developer
- creative
- analytical
- expert
- rationality
- conversational
- chat
- instruct
- llama-cpp
- gguf-my-repo
base_model: ValiantLabs/Qwen3-4B-Esper3
datasets:
- sequelbox/Titanium2.1-DeepSeek-R1
- sequelbox/Tachibana2-DeepSeek-R1
- sequelbox/Raiden-DeepSeek-R1
license: apache-2.0
---
# Triangle104/Qwen3-4B-Esper3-Q4_K_S-GGUF
This model was converted to GGUF format from [`ValiantLabs/Qwen3-4B-Esper3`](https://huggingface.co/ValiantLabs/Qwen3-4B-Esper3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ValiantLabs/Qwen3-4B-Esper3) for more details on the model.
---
Esper 3 is a coding, architecture, and DevOps reasoning specialist built on Qwen 3.
- Finetuned on our DevOps and architecture reasoning and code reasoning data generated with Deepseek R1!
- Improved general and creative reasoning to supplement problem-solving and general chat performance.
- Small model sizes allow running on local desktop and mobile, plus super-fast server inference!
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen3-4B-Esper3-Q4_K_S-GGUF --hf-file qwen3-4b-esper3-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-4B-Esper3-Q4_K_S-GGUF --hf-file qwen3-4b-esper3-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen3-4B-Esper3-Q4_K_S-GGUF --hf-file qwen3-4b-esper3-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen3-4B-Esper3-Q4_K_S-GGUF --hf-file qwen3-4b-esper3-q4_k_s.gguf -c 2048
```
|
csukuangfj/vits-piper-de_DE-glados_turret-low | csukuangfj | 2025-06-16T07:52:42Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2025-06-16T07:08:26Z | # Introduction
The models are from
https://huggingface.co/systemofapwne/piper-de-glados/tree/main/de/de_DE |
johngreendr1/f1141c6c-f51c-4e86-9dbd-a4549c81238c | johngreendr1 | 2025-06-16T07:52:40Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"region:us"
] | null | 2025-06-16T03:05:12Z | ---
base_model: codellama/CodeLlama-7b-hf
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
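Given the `peft` library name and the CodeLlama-7b base model declared in the metadata, loading likely follows the standard PEFT adapter pattern (a sketch under those assumptions, not an author-provided snippet):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model named in the card metadata, then attach this adapter
base = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf", device_map="auto")
model = PeftModel.from_pretrained(base, "johngreendr1/f1141c6c-f51c-4e86-9dbd-a4549c81238c")
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
```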
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
Triangle104/Qwen3-4B-Esper3-Q5_K_M-GGUF | Triangle104 | 2025-06-16T07:52:19Z | 23 | 0 | transformers | [
"transformers",
"gguf",
"esper",
"esper-3",
"valiant",
"valiant-labs",
"qwen",
"qwen-3",
"qwen-3-4b",
"4b",
"reasoning",
"code",
"code-instruct",
"python",
"javascript",
"dev-ops",
"jenkins",
"terraform",
"scripting",
"powershell",
"azure",
"aws",
"gcp",
"cloud",
"problem-solving",
"architect",
"engineer",
"developer",
"creative",
"analytical",
"expert",
"rationality",
"conversational",
"chat",
"instruct",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:sequelbox/Titanium2.1-DeepSeek-R1",
"dataset:sequelbox/Tachibana2-DeepSeek-R1",
"dataset:sequelbox/Raiden-DeepSeek-R1",
"base_model:ValiantLabs/Qwen3-4B-Esper3",
"base_model:quantized:ValiantLabs/Qwen3-4B-Esper3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T23:46:53Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- esper
- esper-3
- valiant
- valiant-labs
- qwen
- qwen-3
- qwen-3-4b
- 4b
- reasoning
- code
- code-instruct
- python
- javascript
- dev-ops
- jenkins
- terraform
- scripting
- powershell
- azure
- aws
- gcp
- cloud
- problem-solving
- architect
- engineer
- developer
- creative
- analytical
- expert
- rationality
- conversational
- chat
- instruct
- llama-cpp
- gguf-my-repo
base_model: ValiantLabs/Qwen3-4B-Esper3
datasets:
- sequelbox/Titanium2.1-DeepSeek-R1
- sequelbox/Tachibana2-DeepSeek-R1
- sequelbox/Raiden-DeepSeek-R1
license: apache-2.0
---
# Triangle104/Qwen3-4B-Esper3-Q5_K_M-GGUF
This model was converted to GGUF format from [`ValiantLabs/Qwen3-4B-Esper3`](https://huggingface.co/ValiantLabs/Qwen3-4B-Esper3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ValiantLabs/Qwen3-4B-Esper3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen3-4B-Esper3-Q5_K_M-GGUF --hf-file qwen3-4b-esper3-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-4B-Esper3-Q5_K_M-GGUF --hf-file qwen3-4b-esper3-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen3-4B-Esper3-Q5_K_M-GGUF --hf-file qwen3-4b-esper3-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen3-4B-Esper3-Q5_K_M-GGUF --hf-file qwen3-4b-esper3-q5_k_m.gguf -c 2048
```
|
Triangle104/Qwen3-4B-Esper3-Q5_K_S-GGUF | Triangle104 | 2025-06-16T07:50:42Z | 25 | 0 | transformers | [
"transformers",
"gguf",
"esper",
"esper-3",
"valiant",
"valiant-labs",
"qwen",
"qwen-3",
"qwen-3-4b",
"4b",
"reasoning",
"code",
"code-instruct",
"python",
"javascript",
"dev-ops",
"jenkins",
"terraform",
"scripting",
"powershell",
"azure",
"aws",
"gcp",
"cloud",
"problem-solving",
"architect",
"engineer",
"developer",
"creative",
"analytical",
"expert",
"rationality",
"conversational",
"chat",
"instruct",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:sequelbox/Titanium2.1-DeepSeek-R1",
"dataset:sequelbox/Tachibana2-DeepSeek-R1",
"dataset:sequelbox/Raiden-DeepSeek-R1",
"base_model:ValiantLabs/Qwen3-4B-Esper3",
"base_model:quantized:ValiantLabs/Qwen3-4B-Esper3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T23:44:44Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- esper
- esper-3
- valiant
- valiant-labs
- qwen
- qwen-3
- qwen-3-4b
- 4b
- reasoning
- code
- code-instruct
- python
- javascript
- dev-ops
- jenkins
- terraform
- scripting
- powershell
- azure
- aws
- gcp
- cloud
- problem-solving
- architect
- engineer
- developer
- creative
- analytical
- expert
- rationality
- conversational
- chat
- instruct
- llama-cpp
- gguf-my-repo
base_model: ValiantLabs/Qwen3-4B-Esper3
datasets:
- sequelbox/Titanium2.1-DeepSeek-R1
- sequelbox/Tachibana2-DeepSeek-R1
- sequelbox/Raiden-DeepSeek-R1
license: apache-2.0
---
# Triangle104/Qwen3-4B-Esper3-Q5_K_S-GGUF
This model was converted to GGUF format from [`ValiantLabs/Qwen3-4B-Esper3`](https://huggingface.co/ValiantLabs/Qwen3-4B-Esper3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ValiantLabs/Qwen3-4B-Esper3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen3-4B-Esper3-Q5_K_S-GGUF --hf-file qwen3-4b-esper3-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-4B-Esper3-Q5_K_S-GGUF --hf-file qwen3-4b-esper3-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen3-4B-Esper3-Q5_K_S-GGUF --hf-file qwen3-4b-esper3-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen3-4B-Esper3-Q5_K_S-GGUF --hf-file qwen3-4b-esper3-q5_k_s.gguf -c 2048
```
|
veddhanth/lora-trained-xl-stage-2-pretrained-enc-enhanced-sneaker | veddhanth | 2025-06-16T07:50:21Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-06-16T07:38:58Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of sks sneaker
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-stage-2-pretrained-enc-enhanced-sneaker
<Gallery />
## Model description
These are veddhanth/lora-trained-xl-stage-2-pretrained-enc-enhanced-sneaker LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of sks sneaker` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/veddhanth/lora-trained-xl-stage-2-pretrained-enc-enhanced-sneaker/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
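Until the authors add their own snippet, a minimal sketch using the standard diffusers LoRA API (an assumption; it also loads the fp16-fix VAE named under "Model description"):

```python
import torch
from diffusers import AutoPipelineForText2Image, AutoencoderKL

# The fp16-fix VAE is the one mentioned in the model description above
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("veddhanth/lora-trained-xl-stage-2-pretrained-enc-enhanced-sneaker")
image = pipe("a photo of sks sneaker").images[0]
```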
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.15_0.15_epoch1 | MinaMila | 2025-06-16T07:48:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:46:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
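The tags mark this as a gemma2 text-generation checkpoint, so a chat-style sketch along these lines should work (an assumption inferred from the tags, not an author-provided snippet):

```python
from transformers import pipeline

# Hypothetical quick-start, inferred from the gemma2 / text-generation / conversational tags
generator = pipeline("text-generation", model="MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.15_0.15_epoch1", device_map="auto")
out = generator([{"role": "user", "content": "Hello!"}], max_new_tokens=64, return_full_text=False)
print(out[0]["generated_text"])
```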
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/Qwen3-4B-Esper3-Q4_K_M-GGUF | Triangle104 | 2025-06-16T07:48:08Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"esper",
"esper-3",
"valiant",
"valiant-labs",
"qwen",
"qwen-3",
"qwen-3-4b",
"4b",
"reasoning",
"code",
"code-instruct",
"python",
"javascript",
"dev-ops",
"jenkins",
"terraform",
"scripting",
"powershell",
"azure",
"aws",
"gcp",
"cloud",
"problem-solving",
"architect",
"engineer",
"developer",
"creative",
"analytical",
"expert",
"rationality",
"conversational",
"chat",
"instruct",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:sequelbox/Titanium2.1-DeepSeek-R1",
"dataset:sequelbox/Tachibana2-DeepSeek-R1",
"dataset:sequelbox/Raiden-DeepSeek-R1",
"base_model:ValiantLabs/Qwen3-4B-Esper3",
"base_model:quantized:ValiantLabs/Qwen3-4B-Esper3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-30T23:35:07Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- esper
- esper-3
- valiant
- valiant-labs
- qwen
- qwen-3
- qwen-3-4b
- 4b
- reasoning
- code
- code-instruct
- python
- javascript
- dev-ops
- jenkins
- terraform
- scripting
- powershell
- azure
- aws
- gcp
- cloud
- problem-solving
- architect
- engineer
- developer
- creative
- analytical
- expert
- rationality
- conversational
- chat
- instruct
- llama-cpp
- gguf-my-repo
base_model: ValiantLabs/Qwen3-4B-Esper3
datasets:
- sequelbox/Titanium2.1-DeepSeek-R1
- sequelbox/Tachibana2-DeepSeek-R1
- sequelbox/Raiden-DeepSeek-R1
license: apache-2.0
---
# Triangle104/Qwen3-4B-Esper3-Q4_K_M-GGUF
This model was converted to GGUF format from [`ValiantLabs/Qwen3-4B-Esper3`](https://huggingface.co/ValiantLabs/Qwen3-4B-Esper3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ValiantLabs/Qwen3-4B-Esper3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen3-4B-Esper3-Q4_K_M-GGUF --hf-file qwen3-4b-esper3-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-4B-Esper3-Q4_K_M-GGUF --hf-file qwen3-4b-esper3-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen3-4B-Esper3-Q4_K_M-GGUF --hf-file qwen3-4b-esper3-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen3-4B-Esper3-Q4_K_M-GGUF --hf-file qwen3-4b-esper3-q4_k_m.gguf -c 2048
```
|
momergul/babylm_interactive_gpt2_student | momergul | 2025-06-16T07:47:54Z | 323 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-08T21:55:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
suzushi/miso-diffusion-2.0 | suzushi | 2025-06-16T07:45:19Z | 0 | 4 | diffusers | [
"diffusers",
"text-to-image",
"en",
"base_model:stabilityai/stable-diffusion-3.5-medium",
"base_model:finetune:stabilityai/stable-diffusion-3.5-medium",
"region:us"
] | text-to-image | 2025-06-15T06:29:30Z | ---
language:
- en
license_name: stabilityai-ai-community
license_link: LICENSE.md
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
base_model:
- suzushi/miso-diffusion-m-1.0
- stabilityai/stable-diffusion-3.5-medium
---
<div style="display: flex; justify-content: center; gap: 20px; margin-bottom: 20px;">
<img src="demo1.png" width="400" />
<img src="demo2.png" width="400" />
</div>
# Anime SD3.5 Medium Model
An attempt to fine-tune SD3.5 Medium.
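A minimal inference sketch with `diffusers`, assuming this repository ships standard diffusers-format SD3.5 weights (unverified; the prompt is just an example):
```python
# Minimal sketch, assuming diffusers-format SD3.5 weights in this repo (unverified).
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "suzushi/miso-diffusion-2.0", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "1girl, solo, upper body, looking at viewer",  # example tag-style prompt
    num_inference_steps=28,
    guidance_scale=5.0,
).images[0]
image.save("sample.png")
```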
## Version History
| Version | Base Training | Aesthetic Training | Total Epochs |
|---------|--------------|-------------------|--------------|
| alpha | 250K images | 0 images | 1 |
| beta | 160K images | 0 images | 3 |
| 1.0 | 600K images | 0 images | 2 + (3 from beta) |
| 1.1 | 710K images | 0 images | 5 |
| 2.0 | 1.08M images | 0 images | 5 |
## Training Methodology
Training was done on a GH200 with 96 GB of VRAM. Now that prior training shows decent results, the learning rate has been increased slightly.
Training settings: Adafactor with a batch size of 40, lr_scheduler: cosine
SD3.5-specific settings:
enable_scaled_pos_embed = true
pos_emb_random_crop_rate = 0.2
weighting_scheme = "flow"
learning_rate = 8e-6
learning_rate_te1 = 5e-6
learning_rate_te2 = 5e-6
Train CLIP: true, train t5xxl: false
## Support Me
At the moment, training an epoch costs around 130 dollars. If you like my project, please consider supporting me: https://ko-fi.com/suzushi2024
Lastly, huge thanks to meg, who has been supporting this project; without him it would not have been possible!
|
lyae/my-sentiment-classifier | lyae | 2025-06-16T07:43:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T07:42:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
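Until the author fills this in, a minimal sketch assuming standard 🤗 `pipeline` usage for a DistilBERT text-classification checkpoint (the label set this model emits is not documented here):
```python
# Minimal sketch; the actual labels are not documented in this card.
from transformers import pipeline

classifier = pipeline("text-classification", model="lyae/my-sentiment-classifier")
print(classifier("I really enjoyed this movie!"))
```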
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SidXXD/test-noise_upscaling-f_26 | SidXXD | 2025-06-16T07:42:41Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2025-06-15T17:37:48Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: photo of a sks person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/test-noise_upscaling-f_26
These are Custom Diffusion adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on `photo of a sks person` using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images below.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
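A minimal inference sketch, assuming the repository follows the standard `diffusers` Custom Diffusion layout (the weight file name below is the diffusers default and may not match this repo):
```python
# Minimal sketch, assuming the default diffusers Custom Diffusion weight layout.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs(
    "SidXXD/test-noise_upscaling-f_26",
    weight_name="pytorch_custom_diffusion_weights.bin",  # diffusers default; may differ
)

image = pipe("photo of a sks person", num_inference_steps=50).images[0]
image.save("person.png")
```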
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.05_0.15_epoch2 | MinaMila | 2025-06-16T07:41:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:39:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mdhasnainali/ReaderLM-v2-Finetuned-For-Job-Post | mdhasnainali | 2025-06-16T07:40:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:38:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aieng-lab/gpt2-large_review-aspect | aieng-lab | 2025-06-16T07:40:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"en",
"base_model:openai-community/gpt2-large",
"base_model:finetune:openai-community/gpt2-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T07:40:02Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- gpt2-large
pipeline_tag: text-classification
---
# GPT-2 large for classifying API reviews
This model classifies API reviews in developer forums (e.g., Stack Overflow) as 'usability', 'others', 'onlysentiment', 'bug', 'performance', 'community', 'documentation', 'compatibility', 'legal', 'portability' or 'security'.
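A minimal usage sketch, assuming standard 🤗 `pipeline` text-classification usage (not an official snippet from the authors):
```python
from transformers import pipeline

# Returns the top aspect label for an API review sentence.
classifier = pipeline("text-classification", model="aieng-lab/gpt2-large_review-aspect")
print(classifier("The new client library crashes whenever I pass a null callback."))
```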
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [gpt2-large](https://huggingface.co/gpt2-large)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
  author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.15_0.25_epoch2 | MinaMila | 2025-06-16T07:40:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:38:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Abhikhm/llama3.2-1B-notechat | Abhikhm | 2025-06-16T07:40:19Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-1B-Instruct",
"region:us"
] | null | 2025-06-16T07:40:15Z | ---
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
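Until the authors fill this in, a minimal sketch assuming the adapter loads onto the stated base model with PEFT (untested; the prompt is hypothetical):
```python
# Minimal sketch, assuming standard PEFT adapter loading (untested).
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("Abhikhm/llama3.2-1B-notechat")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")

prompt = "Summarize this clinical note:"  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```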
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
japat123/qwen_jun16_1 | japat123 | 2025-06-16T07:40:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-16T07:39:00Z | ---
base_model: unsloth/qwen3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** japat123
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-8b-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Skyfallirk/vasnetsov_LoRa | Skyfallirk | 2025-06-16T07:40:02Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-06-16T07:39:58Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo collage in vasnetsov style
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Skyfallirk/vasnetsov_LoRa
<Gallery />
## Model description
These are Skyfallirk/vasnetsov_LoRa LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo collage in vasnetsov style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](Skyfallirk/vasnetsov_LoRa/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
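Pending the author's snippet, a minimal sketch assuming standard `diffusers` SDXL LoRA loading (unofficial; the fp16-fix VAE mirrors the training setup):
```python
# Minimal sketch, assuming standard diffusers SDXL LoRA loading (unofficial).
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Skyfallirk/vasnetsov_LoRa")

image = pipe("a photo collage in vasnetsov style").images[0]
image.save("collage.png")
```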
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
aieng-lab/gpt2_review-aspect | aieng-lab | 2025-06-16T07:38:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"en",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T07:38:21Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- gpt2
pipeline_tag: text-classification
---
# GPT-2 small for classifying API reviews
This model classifies API reviews in developer forums (e.g., Stack Overflow) as 'usability', 'others', 'onlysentiment', 'bug', 'performance', 'community', 'documentation', 'compatibility', 'legal', 'portability' or 'security'.
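A minimal usage sketch, assuming standard 🤗 `pipeline` text-classification usage (unofficial):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="aieng-lab/gpt2_review-aspect")
print(classifier("Setting this up took five minutes; the API is very intuitive."))
```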
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [gpt2](https://huggingface.co/gpt2)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
  author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
aieng-lab/ModernBERT-large_review-aspect | aieng-lab | 2025-06-16T07:37:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"en",
"base_model:answerdotai/ModernBERT-large",
"base_model:finetune:answerdotai/ModernBERT-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T07:37:32Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- answerdotai/ModernBERT-large
pipeline_tag: text-classification
---
# ModernBERT large for classifying API reviews
This model classifies API reviews in developer forums (e.g., Stack Overflow) as 'usability', 'others', 'onlysentiment', 'bug', 'performance', 'community', 'documentation', 'compatibility', 'legal', 'portability' or 'security'.
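A minimal usage sketch, assuming standard 🤗 `pipeline` text-classification usage (unofficial; ModernBERT needs a recent `transformers` release):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="aieng-lab/ModernBERT-large_review-aspect")
print(classifier("The docs never mention that this endpoint was deprecated."))
```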
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
  author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
csukuangfj/vits-piper-de_DE-glados-medium | csukuangfj | 2025-06-16T07:36:48Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2025-06-16T07:07:38Z | # Introduction
Models are from https://huggingface.co/systemofapwne/piper-de-glados/tree/main/de/de_DE |
csukuangfj/vits-piper-de_DE-glados-low | csukuangfj | 2025-06-16T07:36:27Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2025-06-16T07:07:24Z | # Introduction
Models are from https://huggingface.co/systemofapwne/piper-de-glados/tree/main/de/de_DE |
aieng-lab/bert-large-cased_review-aspect | aieng-lab | 2025-06-16T07:34:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"en",
"base_model:google-bert/bert-large-cased",
"base_model:finetune:google-bert/bert-large-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T07:34:44Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- bert-large-cased
pipeline_tag: text-classification
---
# BERT large for classifying API reviews
This model classifies API reviews in developer forums (e.g., Stack Overflow) as 'usability', 'others', 'onlysentiment', 'bug', 'performance', 'community', 'documentation', 'compatibility', 'legal', 'portability' or 'security'.
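A minimal usage sketch, assuming standard 🤗 `pipeline` text-classification usage (unofficial):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="aieng-lab/bert-large-cased_review-aspect")
print(classifier("This library is painfully slow on large payloads."))
```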
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [bert-large-cased](https://huggingface.co/bert-large-cased)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
  author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.05_0.15_epoch1 | MinaMila | 2025-06-16T07:34:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:33:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aieng-lab/bert-base-cased_review-aspect | aieng-lab | 2025-06-16T07:34:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"en",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-16T07:34:07Z | ---
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- bert-base-cased
pipeline_tag: text-classification
---
# BERT base for classifying API reviews
This model classifies API reviews in developer forums (e.g., Stack Overflow) as 'usability', 'others', 'onlysentiment', 'bug', 'performance', 'community', 'documentation', 'compatibility', 'legal', 'portability' or 'security'.
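A minimal usage sketch, assuming standard 🤗 `pipeline` text-classification usage (unofficial):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="aieng-lab/bert-base-cased_review-aspect")
print(classifier("The community around this SDK is very responsive on GitHub."))
```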
- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [bert-base-cased](https://huggingface.co/bert-base-cased)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
## Citation
```
@misc{pena2025benchmark,
  author = {Fabian Peña and Steffen Herbold},
title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
year = {2025}
}
```
|
Triangle104/Q3-8B-Kintsugi-Q8_0-GGUF | Triangle104 | 2025-06-16T07:33:15Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"axolotl",
"unsloth",
"roleplay",
"conversational",
"llama-cpp",
"gguf-my-repo",
"dataset:PygmalionAI/PIPPA",
"dataset:Alfitaria/nemotron-ultra-reasoning-synthkink",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:FreedomIntelligence/Medical-R1-Distill-Data",
"dataset:cognitivecomputations/SystemChat-2.0",
"dataset:allenai/tulu-3-sft-personas-instruction-following",
"dataset:kalomaze/Opus_Instruct_25k",
"dataset:simplescaling/s1K-claude-3-7-sonnet",
"dataset:ai2-adapt-dev/flan_v2_converted",
"dataset:grimulkan/theory-of-mind",
"dataset:grimulkan/physical-reasoning",
"dataset:nvidia/HelpSteer3",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"dataset:nbeerbower/Purpura-DPO",
"dataset:antiven0m/physical-reasoning-dpo",
"dataset:allenai/tulu-3-IF-augmented-on-policy-70b",
"dataset:NobodyExistsOnTheInternet/system-message-DPO",
"base_model:allura-org/Q3-8B-Kintsugi",
"base_model:quantized:allura-org/Q3-8B-Kintsugi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T07:24:06Z | ---
license: apache-2.0
base_model: allura-org/Q3-8B-Kintsugi
library_name: transformers
tags:
- mergekit
- axolotl
- unsloth
- roleplay
- conversational
- llama-cpp
- gguf-my-repo
datasets:
- PygmalionAI/PIPPA
- Alfitaria/nemotron-ultra-reasoning-synthkink
- PocketDoc/Dans-Prosemaxx-Gutenberg
- FreedomIntelligence/Medical-R1-Distill-Data
- cognitivecomputations/SystemChat-2.0
- allenai/tulu-3-sft-personas-instruction-following
- kalomaze/Opus_Instruct_25k
- simplescaling/s1K-claude-3-7-sonnet
- ai2-adapt-dev/flan_v2_converted
- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
- nvidia/HelpSteer3
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- nbeerbower/Purpura-DPO
- antiven0m/physical-reasoning-dpo
- allenai/tulu-3-IF-augmented-on-policy-70b
- NobodyExistsOnTheInternet/system-message-DPO
---
# Triangle104/Q3-8B-Kintsugi-Q8_0-GGUF
This model was converted to GGUF format from [`allura-org/Q3-8B-Kintsugi`](https://huggingface.co/allura-org/Q3-8B-Kintsugi) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/allura-org/Q3-8B-Kintsugi) for more details on the model.
---
Q3-8B-Kintsugi is a roleplaying model finetuned from Qwen3-8B-Base.
During testing, Kintsugi punched well above its weight class in terms of parameters, especially for 1-on-1 roleplaying and general storywriting.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q8_0-GGUF --hf-file q3-8b-kintsugi-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q8_0-GGUF --hf-file q3-8b-kintsugi-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q8_0-GGUF --hf-file q3-8b-kintsugi-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q8_0-GGUF --hf-file q3-8b-kintsugi-q8_0.gguf -c 2048
```
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.15_0.25_epoch1 | MinaMila | 2025-06-16T07:32:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:30:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
xfjcoder/llama3.1-8b-erged-6bit | xfjcoder | 2025-06-16T07:31:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:31:17Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** xfjcoder
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.05_0.25_epoch2 | MinaMila | 2025-06-16T07:28:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:26:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
FlameF0X/SnowflakeCore-G0-Release-3-1B | FlameF0X | 2025-06-16T07:26:20Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-15T19:54:04Z | ---
license: apache-2.0
---
# ๐ฌ SnowflakeCore-G0-Release-3 Architecture Size Report
## Summary
This document provides a detailed breakdown of the parameter count and structural design of the **SnowflakeCore-G0-Release-3** model. SnowflakeCore-G0-Release-3 is a custom decoder-only transformer model built from scratch, designed for autoregressive language modeling with rotary positional embeddings (RoPE).
---
## ๐ Model Architecture Overview
| Component | Value |
|------------------------|-----------------|
| Architecture Type | Decoder-only Transformer |
| Hidden Size (d_model) | 1536 |
| Number of Layers | 32 |
| Attention Heads | 16 |
| Feedforward Dim (d_ff) | 6144 |
| Max Sequence Length | 2048 |
| Positional Encoding | Rotary (RoPE) |
| Vocabulary Size | 50,000 (assumed)|
| Total Parameters | **โ 1.06 Billion** |
---
## ๐งฎ Parameter Count Breakdown
### 1. Embedding Layers
- **Token Embedding**: V ร d = 50,000 ร 1536 = 76.8M
- **Output Projection**: d ร V = 1536 ร 50,000 = 76.8M
**Total**:
```
P_embedding = 2 ยท 1536 ยท 50,000 = 153.6M
```
---
### 2. Transformer Blocks
Each of the 32 layers contains:
- **Multi-Head Attention (Q, K, V, Out)**:
4 ยท dยฒ = 4 ยท 1536ยฒ = 9.44M
- **Feedforward Network (MLP)**:
2 ยท d ยท d_ff = 2 ยท 1536 ยท 6144 = 18.87M
- **Total per Layer**:
```
9.44M + 18.87M = 28.31M
```
- **Total across 32 layers**:
```
32 ยท 28.31M = 905.97M
```
---
### 3. Positional Embedding
- **Type**: Rotary Positional Embeddings (RoPE)
- **Parameter Count**: **0** (non-learned, sinusoidal basis)
---
## ๐ Final Parameter Estimate
```
Total Parameters โ P_embedding + P_transformer = 153.6M + 905.97M = 1,059.6M
```
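To double-check the arithmetic, here is a small Python sketch of the same estimate (LayerNorm and bias terms are ignored, exactly as in the breakdown above):
```
# Rough parameter estimate for SnowflakeCore-G0-Release-3
# (ignores LayerNorm and bias terms, as in the breakdown above)
V, d, d_ff, n_layers = 50_000, 1536, 6144, 32

embedding = 2 * V * d        # token embedding + output projection
attention = 4 * d * d        # Q, K, V, and output projections per layer
mlp = 2 * d * d_ff           # up- and down-projections per layer
per_layer = attention + mlp  # ~28.31M per transformer block

total = embedding + n_layers * per_layer
print(f"{total / 1e6:.1f}M parameters")  # -> 1059.6M
```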
---
## ๐ง Training Regime (Contextual)
| Item | Value |
|-----------------------------|-------------------|
| Training Dataset Size | ~2 million rows |
| Max Tokens per Sequence | 2048 |
| Effective Batch Size | 32 ร 4 = 128 |
| Number of Epochs | 15 |
| Optimizer | AdamW |
| Learning Rate | 3 ร 10โปโด |
Approximate number of tokens:
```
2M ร avg_tokens_per_row โค 4B tokens
```
---
## ๐งพ Notes
- SnowflakeCore-G0-Release-3 exceeds the size of GPT-2 Large (~774M parameters).
- With RoPE and 32 layers, the model is well-positioned for long-range generalization.
- This parameter size is consistent with the compute-optimal design frontier for mid-scale language models.
---
## ๐ฆ Conclusion
**SnowflakeCore-G0-Release-3** is a rigorously engineered, 1.06B parameter language model with modern architectural choices (RoPE, deep stack, wide FFN) that position it as a strong open foundation model for further research, deployment, and extension. |
robotgeneralist/openpi-nomagic | robotgeneralist | 2025-06-16T07:24:23Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-04-14T18:14:58Z | ---
license: mit
---
# Nomagic Simple / Adversarial Box Model Checkpoints
This is a repo to store the most important checkpoints of the `openpi` model.
## Uploading checkpoints
Since the checkpoints are huge,
the fastest and most reliable way to upload them
is by using the `upload-large-folder` command from `huggingface-cli`.
To do so, you first have to log in with appropriate credentials
(you need a token with write permissions to the target repository):
```
huggingface-cli login
```
Next, use `upload-large-folder`.
For example, to upload the `checkpoints` directory to the remote repository, run:
```
huggingface-cli upload-large-folder robotgeneralist/openpi-nomagic-multibox checkpoints --repo-type=model
```
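If you prefer to stay in Python, `huggingface_hub` exposes the same resumable upload as a method. This is a sketch that assumes a reasonably recent `huggingface_hub` release and the same write-enabled token:
```
from huggingface_hub import HfApi

api = HfApi()  # picks up the token stored by `huggingface-cli login`
api.upload_large_folder(
    repo_id="robotgeneralist/openpi-nomagic-multibox",
    folder_path="checkpoints",
    repo_type="model",
)
```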
Note that there is no way to specify a target path where the data will be stored on the remote.
The contents of the directory will be placed under the root directory.
So, for example, if your local folder is organized like the following:
```
checkpoints
--some-dir
--file1
--file2
```
after uploading to the remote, you will have:
```
some-dir
--file1
--file2
```
Luckily, you can still upload additional files later on.
For example, if after the first upload you try to upload:
```
checkpoints
--some-dir
--file3
--file4
```
the remote will become:
```
some-dir
--file1
--file2
--file3
--file4
```
Hence, even though it is slightly inconvenient, this seems to be the best method for uploading big checkpoints because of its efficiency and robustness.
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.05_0.25_epoch1 | MinaMila | 2025-06-16T07:21:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:19:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NAMAA-Space/zarra | NAMAA-Space | 2025-06-16T07:20:05Z | 54 | 2 | model2vec | [
"model2vec",
"safetensors",
"embeddings",
"static-embeddings",
"sentence-transformers",
"sentence-similarity",
"ar",
"dataset:allenai/c4",
"base_model:jinaai/jina-embeddings-v3",
"base_model:finetune:jinaai/jina-embeddings-v3",
"license:mit",
"region:us"
] | sentence-similarity | 2025-05-27T10:22:20Z | ---
library_name: model2vec
license: mit
model_name: Abdelkareem/zarra
tags:
- embeddings
- static-embeddings
- sentence-transformers
datasets:
- allenai/c4
language:
- ar
base_model:
- jinaai/jina-embeddings-v3
pipeline_tag: sentence-similarity
---
# Zarra: Arabic Static Embedding Model

**Zarra** is a static embedding model built using the Model2Vec distillation framework.
It is a distilled version of a Sentence Transformer, specifically optimized for the Arabic language.
Unlike traditional transformer-based models, Zarra produces static embeddings, enabling ultra-fast inference on both CPU and GPU, making it ideal for resource-constrained environments or real-time applications.
## Why Zarra?
โก Exceptional Speed: Delivers embeddings up to 500x faster than sentence transformers.
๐ง Compact & Efficient: Up to 50x smaller in size, allowing easy deployment on edge devices.
๐งฐ Versatile: Well-suited for search, clustering, classification, deduplication, and more.
๐ Arabic-First: Specifically trained on high-quality Arabic data, ensuring relevance and performance across a range of Arabic NLP tasks.
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/628f7a71dd993507cfcbe587/3JEnPfgF2BfbN5H81K0XD.png" alt="Speed vs Performance Chart" width="700"/>
</p>
## About Model2Vec
The Model2Vec distillation technique transfers knowledge from large transformer models into lightweight static embedding spaces, preserving semantic quality while dramatically improving speed and efficiency.
Zarra represents the best of both worlds: the semantic power of transformers and the speed and simplicity of static vectors.
## Installation
Install model2vec using pip:
```
pip install model2vec
```
## Usage
### Using Model2Vec
The [Model2Vec library](https://github.com/MinishLab/model2vec) is the fastest and most lightweight way to run Model2Vec models.
Load this model using the `from_pretrained` method:
```python
from model2vec import StaticModel
# Load a pretrained Model2Vec model
model = StaticModel.from_pretrained("NAMAA-Space/zarra")
# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```
### Using Sentence Transformers
You can also use the [Sentence Transformers library](https://github.com/UKPLab/sentence-transformers) to load and use the model:
```python
from sentence_transformers import SentenceTransformer
# Load a pretrained Sentence Transformer model
model = SentenceTransformer("NAMAA-Space/zarra")
# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```
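For similarity-driven tasks such as search or deduplication, cosine similarity over these embeddings is usually all you need. A quick sketch (the two example sentences are placeholders):
```python
from model2vec import StaticModel
import numpy as np

model = StaticModel.from_pretrained("NAMAA-Space/zarra")
emb = model.encode(["جملة للتجربة", "جملة أخرى للتجربة"])  # two toy Arabic sentences

# Cosine similarity between the two sentence embeddings
sim = emb[0] @ emb[1] / (np.linalg.norm(emb[0]) * np.linalg.norm(emb[1]))
print(f"cosine similarity: {sim:.3f}")
```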
## How it Works
Model2vec creates a small, fast, and powerful model that outperforms other static embedding models by a large margin on all tasks we could find, while being much faster to create than traditional static embedding models such as GloVe. Best of all, you don't need any data to distill a model using Model2Vec.
It works by passing a vocabulary through a sentence transformer model, then reducing the dimensionality of the resulting embeddings using PCA, and finally weighting the embeddings using [SIF weighting](https://openreview.net/pdf?id=SyK00v5xx). During inference, we simply take the mean of all token embeddings occurring in a sentence.
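As a toy illustration of that inference step (purely illustrative; the real library handles tokenization, PCA, and SIF weighting internally):
```python
# Toy sketch: a Model2Vec-style sentence embedding is just the mean of the
# static embeddings of the tokens appearing in the sentence.
import numpy as np

vocab = {"example": 0, "sentence": 1}    # toy vocabulary
token_embeddings = np.random.rand(2, 4)  # toy static embedding table

def embed(tokens: list[str]) -> np.ndarray:
    vecs = [token_embeddings[vocab[t]] for t in tokens if t in vocab]
    return np.mean(vecs, axis=0)

print(embed(["example", "sentence"]))
```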
## Benchmark on Arabic
## Speed
| Model | Speed (sentences/second) | Device |
|---------------------------------------|--------------------------|--------|
| zarra | 26893.63 | cpu |
| bojji | 27478.15 | cpu |
| potion-multilingual-128M | 27145.31 | cpu |
| paraphrase-multilingual-MiniLM-L12-v2 | 2363.24 | cuda |
| silma_ai_embedding_sts_v0.1 | 627.13 | cuda |
| muffakir_embedding | 621.77 | cuda |
| get_multilingual_base | 895.41 | cuda |
| arabic_retrieval_v1.0 | 618.56 | cuda |
| arabic_triplet_matryoshka_v2 | 610.64 | cuda |
- Zarra and Bojji excel in speed, achieving 26893.63 and 27478.15 sentences per second on CPU, respectively, far surpassing CUDA-based models like arabic_triplet_matryoshka_v2 (610.64).
- Top Performer: Bojji is the fastest model, slightly ahead of Zarra and potion-multilingual-128M (27145.31), highlighting the efficiency of Model2Vec-based models on CPU.
- Key Observation: The high speed of Zarra and Bojji on CPU makes them ideal for resource-constrained environments, offering significant advantages over CUDA-dependent models.
## Size of the Model
| Model | Parameters (M) | Size (MB) | Relative to Largest (%) | Less than Largest (x) |
|----------------------------------|----------------|-----------|-------------------------|-----------------------|
| zarra | 64.00 | 244.14 | 41.92 | 2.39 |
| bojji | 124.88 | 476.40 | 81.79 | 1.22 |
| potion-multilingual-128M | 128.09 | 488.63 | 83.89 | 1.19 |
| paraphrase-multilingual-MiniLM-L12-v2 | 117.65 | 448.82 | 77.06 | 1.30 |
| silma_ai_embedding_sts_v0.1 | 135.19 | 515.72 | 88.54 | 1.13 |
| muffakir_embedding | 135.19 | 515.72 | 88.54 | 1.13 |
| arabic_retrieval_v1.0 | 135.19 | 515.73 | 88.54 | 1.13 |
| arabic_triplet_matryoshka_v2 | 135.19 | 515.72 | 88.54 | 1.13 |
| get_multilingual_base | 305.37 | 582.45 | 100.00 | 1.00 |
- Zarra is the smallest model, with only 64 million parameters and 244.14 MB in size, making it 2.39 times smaller than the largest model (get_multilingual_base).
- Bojji is slightly larger at 124.88 million parameters and 476.40 MB, but still significantly smaller than most other models.
- Top Performer: Zarra leads in compactness, offering the smallest footprint, which is critical for deployment on resource-limited devices.
- Key Observation: The compact size of Zarra and Bojji aligns with their design goal of efficiency, making them highly suitable for edge computing and real-time applications.
| Model | Avg | MIRAC | MLQAR | Massi | Multi | STS17 | STS22 | XNLI_ |
|---------------------------------------|-------|-------|-------|-------|-------|-------|-------|-------|
| arabic_triplet_matryoshka_v2 | 0.6610 | 0.6262 | 0.5093 | 0.5577 | 0.5868 | 0.8531 | 0.6396 | 0.8542 |
| muffakir_embedding | 0.6494 | 0.6424 | 0.5267 | 0.5462 | 0.5943 | 0.8485 | 0.6291 | 0.7583 |
| arabic_retrieval_v1.0 | 0.6473 | 0.6159 | 0.5674 | 0.5832 | 0.5993 | 0.8002 | 0.6254 | 0.7393 |
| gate_arabert-v1 | 0.6444 | 0.5774 | 0.4808 | 0.5345 | 0.5847 | 0.8278 | 0.6310 | 0.8746 |
| get_multilingual_base | 0.6440 | 0.7177 | 0.5698 | 0.5071 | 0.5521 | 0.7881 | 0.6145 | 0.7584 |
| arabic_sts_matryoshka | 0.6413 | 0.5828 | 0.4840 | 0.5457 | 0.5494 | 0.8290 | 0.6242 | 0.8740 |
| silma_ai_embedding_sts_v0.1 | 0.6138 | 0.3799 | 0.5011 | 0.5600 | 0.5749 | 0.8559 | 0.6122 | 0.8125 |
| Arabic-MiniLM-L12-v2-all-nli-triplet | 0.5431 | 0.2240 | 0.3612 | 0.4775 | 0.5698 | 0.8111 | 0.5540 | 0.8043 |
| paraphrase-multilingual-MiniLM-L12-v2 | 0.5208 | 0.2191 | 0.3496 | 0.4515 | 0.5573 | 0.7916 | 0.4908 | 0.7859 |
| bojji | 0.5177 | 0.2941 | 0.3989 | 0.4667 | 0.5433 | 0.7233 | 0.5880 | 0.6094 |
| zarra | 0.4822 | 0.2295 | 0.3473 | 0.4119 | 0.5237 | 0.6469 | 0.6218 | 0.5942 |
| potion-multilingual-128M | 0.4699 | 0.1658 | 0.3150 | 0.4285 | 0.5338 | 0.6511 | 0.5951 | 0.5999 |
| all_minilm_l6_v2 | 0.2843 | 0.0005 | 0.0064 | 0.1905 | 0.4934 | 0.5089 | 0.2518 | 0.5384 |
### Sorted by STS17_main (Score)
| Model Name | STS17_main |
|---------------------------------------|------------|
| silma_ai_embedding_sts_v0.1 | 0.856 |
| arabic_triplet_matryoshka_v2 | 0.853 |
| muffakir_embedding | 0.849 |
| arabic_sts_matryoshka | 0.829 |
| gate_arabert-v1 | 0.828 |
| Arabic-MiniLM-L12-v2-all-nli-triplet | 0.811 |
| arabic_retrieval_v1.0 | 0.800 |
| paraphrase-multilingual-MiniLM-L12-v2 | 0.792 |
| get_multilingual_base | 0.788 |
| bojji | 0.723 |
| potion-multilingual-128M | 0.651 |
| zarra | 0.647 |
| all_minilm_l6_v2 | 0.509 |
### Sorted by STS22.v2_main (Score)
| Model Name | STS22.v2_main |
|---------------------------------------|---------------|
| arabic_triplet_matryoshka_v2 | 0.640 |
| gate_arabert-v1 | 0.631 |
| muffakir_embedding | 0.629 |
| arabic_retrieval_v1.0 | 0.625 |
| arabic_sts_matryoshka | 0.624 |
| zarra | 0.622 |
| get_multilingual_base | 0.615 |
| silma_ai_embedding_sts_v0.1 | 0.612 |
| potion-multilingual-128M | 0.595 |
| bojji | 0.588 |
| Arabic-MiniLM-L12-v2-all-nli-triplet | 0.554 |
| paraphrase-multilingual-MiniLM-L12-v2 | 0.491 |
| all_minilm_l6_v2 | 0.252 |
## Additional Resources
- [Zarra & Bojji Blog](https://kareemai.com/blog/posts/minishlab/blog_zaraah.html)
- [NAMAA Collection](https://huggingface.co/collections/NAMAA-Space/zaraah-683f1f8a1eec1ec8f2badee5)
- [MinishLab](https://minishlab.github.io/)
- [Model2Vec Repo](https://github.com/MinishLab/model2vec) |
allura-org/Q3-8B-Kintsugi | allura-org | 2025-06-16T07:16:59Z | 8 | 3 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mergekit",
"axolotl",
"unsloth",
"roleplay",
"conversational",
"dataset:PygmalionAI/PIPPA",
"dataset:Alfitaria/nemotron-ultra-reasoning-synthkink",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:FreedomIntelligence/Medical-R1-Distill-Data",
"dataset:cognitivecomputations/SystemChat-2.0",
"dataset:allenai/tulu-3-sft-personas-instruction-following",
"dataset:kalomaze/Opus_Instruct_25k",
"dataset:simplescaling/s1K-claude-3-7-sonnet",
"dataset:ai2-adapt-dev/flan_v2_converted",
"dataset:grimulkan/theory-of-mind",
"dataset:grimulkan/physical-reasoning",
"dataset:nvidia/HelpSteer3",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"dataset:nbeerbower/Purpura-DPO",
"dataset:antiven0m/physical-reasoning-dpo",
"dataset:allenai/tulu-3-IF-augmented-on-policy-70b",
"dataset:NobodyExistsOnTheInternet/system-message-DPO",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-13T21:25:09Z | ---
license: apache-2.0
base_model: Qwen/Qwen3-8B-Base
library_name: transformers
tags:
- mergekit
- axolotl
- unsloth
- roleplay
- conversational
datasets:
- PygmalionAI/PIPPA
- Alfitaria/nemotron-ultra-reasoning-synthkink
- PocketDoc/Dans-Prosemaxx-Gutenberg
- FreedomIntelligence/Medical-R1-Distill-Data
- cognitivecomputations/SystemChat-2.0
- allenai/tulu-3-sft-personas-instruction-following
- kalomaze/Opus_Instruct_25k
- simplescaling/s1K-claude-3-7-sonnet
- ai2-adapt-dev/flan_v2_converted
- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
- nvidia/HelpSteer3
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- nbeerbower/Purpura-DPO
- antiven0m/physical-reasoning-dpo
- allenai/tulu-3-IF-augmented-on-policy-70b
- NobodyExistsOnTheInternet/system-message-DPO
---
# Q3-8B-Kintsugi

<small><i>get it? because kintsugi sounds like kitsune? hahaha-</i></small>
# Overview
***Q3-8B-Kintsugi*** is a roleplaying model finetuned from [Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base).
During testing, Kintsugi punched well above its weight class in terms of parameters, especially for 1-on-1 roleplaying and general storywriting.
# Quantizations
EXL3:
- [Official EXL3 quant repo](https://huggingface.co/allura-quants/allura-org_Q3-8B-Kintsugi-EXL3)
GGUF:
- [Official static GGUF quants](https://huggingface.co/allura-quants/allura-org_Q3-8B-Kintsugi-GGUF)
MLX:
- [8, 6, and 4bpw MLX-format quants by soundTeam](https://huggingface.co/collections/allura-quants/q3-8b-kintsugi-mlx-684fc48444f1214749f538c4)
# Usage
- Format is plain-old ChatML (please note that, unlike regular Qwen 3, you do *not* need to prefill empty think tags for it not to reason -- see below). A minimal usage sketch is shown after this list.
- Settings used by testers varied, but we generally stayed around 0.9 temperature and 0.1 min p. Do *not* use repetition penalties (DRY included). They break it.
- Any system prompt can likely be used, but I used the Shingame system prompt (link will be added later i promise)
- The official instruction-following version of Qwen3-8B was not used as the base. Instruction following was trained in post-hoc, and "thinking" traces were not included. __As a result, "thinking" will not function.__
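A minimal Transformers sketch, assuming the repository ships a ChatML chat template and using the sampling settings above (the system prompt and user turn are placeholders; `min_p` requires a recent `transformers` release):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allura-org/Q3-8B-Kintsugi"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are a creative roleplay partner."},  # placeholder
    {"role": "user", "content": "The tavern door creaks open..."},          # placeholder
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.9, min_p=0.1)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```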
# Training Process
1. The [base model](https://huggingface.co/Qwen/Qwen3-8B-Base) first went through a supervised finetune on a corpus of instruction following data, roleplay conversations, and human writing based on the [Ink](https://huggingface.co/collections/allura-org/ink-6772fd1442308781594bbabb)/[Bigger Body](https://huggingface.co/collections/allura-org/bigger-body-67b277af0861cec33b54745d)/[Remnant](https://huggingface.co/collections/allura-org/remnant-6817c2113bbb2aed501513d0) lineage.
2. Finally, a KTO reinforcement learning phase steered the model away from the very purple prose the initial merge had, and improved its logical+spatial reasoning and sense of overall "intelligence".
Both stages here are very similar to [Q3-30B-A3B-Designant](https://huggingface.co/allura-org/Q3-30B-A3B-Designant), which went through a very similar process with the same data.
# Credits
- Fizz - Training, Data Wrangling
- Toaster, Mango, Bot, probably others I forgot ;-; - Testing
- inflatebot - original Designant model card that this one was yoinked from
- Artus - Funding
- Alibaba - Making the original model
- Axolotl, Unsloth, Huggingface - Making the frameworks used to train this model (Axolotl was used for the SFT process, and Unsloth+TRL was used for the KTO process)
- All quanters, inside and outside the org, specifically Artus, Lyra, and soundTeam/Heni
We would like to thank the Allura community on Discord, especially Curse, Heni, Artus and Mawnipulator, for their companionship and moral support. You all mean the world to us <3 |
Triangle104/Q3-8B-Kintsugi-Q4_K_M-GGUF | Triangle104 | 2025-06-16T07:15:51Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"axolotl",
"unsloth",
"roleplay",
"conversational",
"llama-cpp",
"gguf-my-repo",
"dataset:PygmalionAI/PIPPA",
"dataset:Alfitaria/nemotron-ultra-reasoning-synthkink",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:FreedomIntelligence/Medical-R1-Distill-Data",
"dataset:cognitivecomputations/SystemChat-2.0",
"dataset:allenai/tulu-3-sft-personas-instruction-following",
"dataset:kalomaze/Opus_Instruct_25k",
"dataset:simplescaling/s1K-claude-3-7-sonnet",
"dataset:ai2-adapt-dev/flan_v2_converted",
"dataset:grimulkan/theory-of-mind",
"dataset:grimulkan/physical-reasoning",
"dataset:nvidia/HelpSteer3",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"dataset:nbeerbower/Purpura-DPO",
"dataset:antiven0m/physical-reasoning-dpo",
"dataset:allenai/tulu-3-IF-augmented-on-policy-70b",
"dataset:NobodyExistsOnTheInternet/system-message-DPO",
"base_model:allura-org/Q3-8B-Kintsugi",
"base_model:quantized:allura-org/Q3-8B-Kintsugi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T07:12:01Z | ---
license: apache-2.0
base_model: allura-org/Q3-8B-Kintsugi
library_name: transformers
tags:
- mergekit
- axolotl
- unsloth
- roleplay
- conversational
- llama-cpp
- gguf-my-repo
datasets:
- PygmalionAI/PIPPA
- Alfitaria/nemotron-ultra-reasoning-synthkink
- PocketDoc/Dans-Prosemaxx-Gutenberg
- FreedomIntelligence/Medical-R1-Distill-Data
- cognitivecomputations/SystemChat-2.0
- allenai/tulu-3-sft-personas-instruction-following
- kalomaze/Opus_Instruct_25k
- simplescaling/s1K-claude-3-7-sonnet
- ai2-adapt-dev/flan_v2_converted
- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
- nvidia/HelpSteer3
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- nbeerbower/Purpura-DPO
- antiven0m/physical-reasoning-dpo
- allenai/tulu-3-IF-augmented-on-policy-70b
- NobodyExistsOnTheInternet/system-message-DPO
---
# Triangle104/Q3-8B-Kintsugi-Q4_K_M-GGUF
This model was converted to GGUF format from [`allura-org/Q3-8B-Kintsugi`](https://huggingface.co/allura-org/Q3-8B-Kintsugi) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/allura-org/Q3-8B-Kintsugi) for more details on the model.
---
Q3-8B-Kintsugi is a roleplaying model finetuned from Qwen3-8B-Base.
During testing, Kintsugi punched well above its weight class in terms of parameters, especially for 1-on-1 roleplaying and general storywriting.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q4_K_M-GGUF --hf-file q3-8b-kintsugi-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q4_K_M-GGUF --hf-file q3-8b-kintsugi-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q4_K_M-GGUF --hf-file q3-8b-kintsugi-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q4_K_M-GGUF --hf-file q3-8b-kintsugi-q4_k_m.gguf -c 2048
```
|
Triangle104/Q3-8B-Kintsugi-Q4_K_S-GGUF | Triangle104 | 2025-06-16T07:14:57Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"axolotl",
"unsloth",
"roleplay",
"conversational",
"llama-cpp",
"gguf-my-repo",
"dataset:PygmalionAI/PIPPA",
"dataset:Alfitaria/nemotron-ultra-reasoning-synthkink",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:FreedomIntelligence/Medical-R1-Distill-Data",
"dataset:cognitivecomputations/SystemChat-2.0",
"dataset:allenai/tulu-3-sft-personas-instruction-following",
"dataset:kalomaze/Opus_Instruct_25k",
"dataset:simplescaling/s1K-claude-3-7-sonnet",
"dataset:ai2-adapt-dev/flan_v2_converted",
"dataset:grimulkan/theory-of-mind",
"dataset:grimulkan/physical-reasoning",
"dataset:nvidia/HelpSteer3",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"dataset:nbeerbower/Purpura-DPO",
"dataset:antiven0m/physical-reasoning-dpo",
"dataset:allenai/tulu-3-IF-augmented-on-policy-70b",
"dataset:NobodyExistsOnTheInternet/system-message-DPO",
"base_model:allura-org/Q3-8B-Kintsugi",
"base_model:quantized:allura-org/Q3-8B-Kintsugi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T06:39:48Z | ---
license: apache-2.0
base_model: allura-org/Q3-8B-Kintsugi
library_name: transformers
tags:
- mergekit
- axolotl
- unsloth
- roleplay
- conversational
- llama-cpp
- gguf-my-repo
datasets:
- PygmalionAI/PIPPA
- Alfitaria/nemotron-ultra-reasoning-synthkink
- PocketDoc/Dans-Prosemaxx-Gutenberg
- FreedomIntelligence/Medical-R1-Distill-Data
- cognitivecomputations/SystemChat-2.0
- allenai/tulu-3-sft-personas-instruction-following
- kalomaze/Opus_Instruct_25k
- simplescaling/s1K-claude-3-7-sonnet
- ai2-adapt-dev/flan_v2_converted
- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
- nvidia/HelpSteer3
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- nbeerbower/Purpura-DPO
- antiven0m/physical-reasoning-dpo
- allenai/tulu-3-IF-augmented-on-policy-70b
- NobodyExistsOnTheInternet/system-message-DPO
---
# Triangle104/Q3-8B-Kintsugi-Q4_K_S-GGUF
This model was converted to GGUF format from [`allura-org/Q3-8B-Kintsugi`](https://huggingface.co/allura-org/Q3-8B-Kintsugi) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/allura-org/Q3-8B-Kintsugi) for more details on the model.
---
Q3-8B-Kintsugi is a roleplaying model finetuned from Qwen3-8B-Base.
During testing, Kintsugi punched well above its weight class in terms of parameters, especially for 1-on-1 roleplaying and general storywriting.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q4_K_S-GGUF --hf-file q3-8b-kintsugi-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q4_K_S-GGUF --hf-file q3-8b-kintsugi-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q4_K_S-GGUF --hf-file q3-8b-kintsugi-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q4_K_S-GGUF --hf-file q3-8b-kintsugi-q4_k_s.gguf -c 2048
```
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.05_0.5_epoch2 | MinaMila | 2025-06-16T07:14:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:12:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fsa32442/cl-tohoku-bert-base-japanese-v3-ner-wikipedia-dataset | fsa32442 | 2025-06-16T07:13:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-06-16T07:12:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AndyBlocker/ViStream | AndyBlocker | 2025-06-16T07:10:28Z | 0 | 0 | pytorch | [
"pytorch",
"computer-vision",
"object-tracking",
"spiking-neural-networks",
"visual-streaming-perception",
"energy-efficient",
"cvpr-2025",
"object-detection",
"license:cc-by-4.0",
"region:us"
] | object-detection | 2025-06-16T05:52:58Z | ---
license: cc-by-4.0
library_name: pytorch
tags:
- computer-vision
- object-tracking
- spiking-neural-networks
- visual-streaming-perception
- energy-efficient
- cvpr-2025
pipeline_tag: object-detection
---
# ViStream: Law-of-Charge-Conservation Inspired Spiking Neural Network for Visual Streaming Perception
**ViStream** is a novel energy-efficient framework for Visual Streaming Perception (VSP) that leverages Spiking Neural Networks (SNNs) with Law of Charge Conservation (LoCC) properties.
## Model Details
### Model Description
- **Developed by:** Kang You, Ziling Wei, Jing Yan, Boning Zhang, Qinghai Guo, Yaoyu Zhang, Zhezhi He
- **Model type:** Spiking Neural Network for Visual Streaming Perception
- **Language(s):** PyTorch implementation
- **License:** CC-BY-4.0
- **Paper:** [CVPR 2025](https://openaccess.thecvf.com/content/CVPR2025/papers/You_VISTREAM_Improving_Computation_Efficiency_of_Visual_Streaming_Perception_via_Law-of-Charge-Conservation_CVPR_2025_paper.pdf)
- **Repository:** [GitHub](https://github.com/Intelligent-Computing-Research-Group/ViStream)
### Model Architecture
ViStream introduces two key innovations:
1. **Law of Charge Conservation (LoCC)** property in ST-BIF neurons
2. **Differential Encoding (DiffEncode)** scheme for temporal optimization
The framework achieves significant computational reduction while maintaining accuracy equivalent to ANN counterparts.
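To picture the DiffEncode idea, here is a toy sketch. It is illustrative only and is not the paper's algorithm: the function name, threshold, and tensor shapes are assumptions. Consecutive frames are compared and only their differences are emitted as signed spikes, so static regions generate no events and hence almost no computation.

```python
import torch

def diff_encode(frames: torch.Tensor, threshold: float = 0.05) -> torch.Tensor:
    """Toy differential encoding: emit signed spikes only where
    consecutive frames differ, so static pixels cost nothing."""
    diffs = frames[1:] - frames[:-1]        # temporal differences between frames
    spikes = torch.zeros_like(diffs)
    spikes[diffs > threshold] = 1.0         # positive spikes add charge
    spikes[diffs < -threshold] = -1.0       # negative spikes cancel charge
    return spikes

# Eight grayscale frames of a mostly static 64x64 scene with one changing patch
frames = torch.rand(1, 64, 64).repeat(8, 1, 1)
frames[4:, 10:20, 10:20] += 0.5
spikes = diff_encode(frames)
print(f"fraction of active spike sites: {spikes.ne(0).float().mean():.4f}")
```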
## Uses
### Direct Use
ViStream can be directly used for:
- **Multiple Object Tracking (MOT)**
- **Single Object Tracking (SOT)**
- **Video Object Segmentation (VOS)**
- **Multiple Object Tracking and Segmentation (MOTS)**
- **Pose Tracking**
### Downstream Use
The model can be fine-tuned for various visual streaming perception tasks in:
- Autonomous driving
- UAV navigation
- AR/VR applications
- Real-time surveillance
## Bias, Risks, and Limitations
### Limitations
- Requires specific hardware optimization for maximum energy benefits
- Performance may vary with different frame rates
- Limited to visual perception tasks
### Recommendations
- Test thoroughly on target hardware before deployment
- Consider computational constraints of edge devices
- Validate performance on domain-specific datasets
## How to Get Started with the Model
```python
from huggingface_hub import hf_hub_download
import torch
# Download the checkpoint
checkpoint_path = hf_hub_download(
    repo_id="AndyBlocker/ViStream",
    filename="checkpoint-90.pth"
)
# Load the model (requires ViStream implementation)
checkpoint = torch.load(checkpoint_path, map_location='cpu')
```
For complete usage examples, see the [GitHub repository](https://github.com/Intelligent-Computing-Research-Group/ViStream).
## Training Details
### Training Data
The model was trained on multiple datasets for various visual streaming perception tasks including object tracking, video object segmentation, and pose tracking.
### Training Procedure
**Training Details:**
- Framework: PyTorch
- Optimization: Energy-efficient SNN training with Law of Charge Conservation
- Architecture: ResNet-based backbone with spike quantization layers
## Evaluation
The model demonstrates competitive performance across multiple visual streaming perception tasks while achieving significant energy efficiency improvements compared to traditional ANN-based approaches. Detailed evaluation results are available in the [CVPR 2025 paper](https://openaccess.thecvf.com/content/CVPR2025/papers/You_VISTREAM_Improving_Computation_Efficiency_of_Visual_Streaming_Perception_via_Law-of-Charge-Conservation_CVPR_2025_paper.pdf).
## Model Card Authors
Kang You, Ziling Wei, Jing Yan, Boning Zhang, Qinghai Guo, Yaoyu Zhang, Zhezhi He
## Model Card Contact
For questions about this model, please open an issue in the [GitHub repository](https://github.com/Intelligent-Computing-Research-Group/ViStream).
## Citation
```bibtex
@inproceedings{you2025vistream,
title={VISTREAM: Improving Computation Efficiency of Visual Streaming Perception via Law-of-Charge-Conservation Inspired Spiking Neural Network},
author={You, Kang and Wei, Ziling and Yan, Jing and Zhang, Boning and Guo, Qinghai and Zhang, Yaoyu and He, Zhezhi},
booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
pages={8796--8805},
year={2025}
}
``` |
phospho-app/jmota27-ACT_BBOX-boats_datasets-u50na | phospho-app | 2025-06-16T07:10:27Z | 0 | 0 | null | [
"phosphobot",
"act",
"region:us"
] | null | 2025-06-16T07:05:08Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
The object 'black boat' was detected in 0 episodes in secondary_0 camera (should be: 10 episodes min). This is not enough to train a model. Check your dataset: https://lerobot-visualize-dataset.hf.space/jmota27/boats_datasets/ and rephrase the instruction.
```
## Training parameters:
- **Dataset**: [jmota27/boats_datasets](https://huggingface.co/datasets/jmota27/boats_datasets)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.15_0.75_epoch2 | MinaMila | 2025-06-16T07:08:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:06:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NICFRU/nc_ner_bert_model_german_alle_ner_tags | NICFRU | 2025-06-16T07:08:17Z | 0 | 0 | null | [
"safetensors",
"bert",
"region:us"
] | null | 2025-06-16T07:05:43Z | # nc_ner_bert_model
This model is a fine-tuned version of bert-base-german-cased on the german-ler dataset.
It achieves the following results on the evaluation set:
- Loss: 0.011337515898048878
- F1: 0.9723312768741821
- Precision: 0.9669083472764455
- Recall: 0.9778153788306072
## Model description
More information needed
## Intended uses & limitations
More information needed
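In the meantime, a minimal inference sketch (assuming this card's repo id hosts the fine-tuned weights; the example sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="NICFRU/nc_ner_bert_model_german_alle_ner_tags",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Das Bundesverfassungsgericht in Karlsruhe verhandelte am 3. Mai."))
```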
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- num_epochs: 3
### Training results
| loss | epoch | step | eval_loss | eval_f1 | eval_precision | eval_recall |
| --- | --- | --- | --- | --- | --- | --- |
| 0.7592 | 0.029958058717795086 | 50 | nan | nan | nan | nan |
| 0.1908 | 0.05991611743559017 | 100 | nan | nan | nan | nan |
| 0.1288 | 0.08987417615338526 | 150 | nan | nan | nan | nan |
| 0.0877 | 0.11983223487118035 | 200 | nan | nan | nan | nan |
| nan | 0.11983223487118035 | 200 | 0.07891597598791122 | 0.71008601947254 | 0.6772448611611973 | 0.7462745877210412 |
| 0.0782 | 0.14979029358897544 | 250 | nan | nan | nan | nan |
| 0.071 | 0.17974835230677053 | 300 | nan | nan | nan | nan |
| 0.0605 | 0.2097064110245656 | 350 | nan | nan | nan | nan |
| 0.0491 | 0.2396644697423607 | 400 | nan | nan | nan | nan |
| nan | 0.2396644697423607 | 400 | 0.05076289176940918 | 0.799463190184049 | 0.7723652528245971 | 0.828531690840453 |
| 0.0471 | 0.2696225284601558 | 450 | nan | nan | nan | nan |
| 0.0463 | 0.2995805871779509 | 500 | nan | nan | nan | nan |
| 0.0544 | 0.32953864589574594 | 550 | nan | nan | nan | nan |
| 0.0355 | 0.35949670461354105 | 600 | nan | nan | nan | nan |
| nan | 0.35949670461354105 | 600 | 0.03874693065881729 | 0.8478779208392943 | 0.8152971386647102 | 0.8831710709318498 |
| 0.0479 | 0.3894547633313361 | 650 | nan | nan | nan | nan |
| 0.0343 | 0.4194128220491312 | 700 | nan | nan | nan | nan |
| 0.033 | 0.44937088076692633 | 750 | nan | nan | nan | nan |
| 0.0367 | 0.4793289394847214 | 800 | nan | nan | nan | nan |
| nan | 0.4793289394847214 | 800 | 0.03735911101102829 | 0.8572247706422018 | 0.8258147670778863 | 0.891118617126962 |
| 0.0343 | 0.5092869982025164 | 850 | nan | nan | nan | nan |
| 0.0361 | 0.5392450569203115 | 900 | nan | nan | nan | nan |
| 0.0408 | 0.5692031156381067 | 950 | nan | nan | nan | nan |
| 0.0315 | 0.5991611743559018 | 1000 | nan | nan | nan | nan |
| nan | 0.5991611743559018 | 1000 | 0.030343208461999893 | 0.8759884281581485 | 0.8510399100618324 | 0.902443870454997 |
| 0.036 | 0.6291192330736968 | 1050 | nan | nan | nan | nan |
| 0.0225 | 0.6590772917914919 | 1100 | nan | nan | nan | nan |
| 0.0275 | 0.689035350509287 | 1150 | nan | nan | nan | nan |
| 0.0296 | 0.7189934092270821 | 1200 | nan | nan | nan | nan |
| nan | 0.7189934092270821 | 1200 | 0.03183047100901604 | 0.8875096974398758 | 0.866830839174086 | 0.9091992847208424 |
| 0.04 | 0.7489514679448772 | 1250 | nan | nan | nan | nan |
| 0.0281 | 0.7789095266626722 | 1300 | nan | nan | nan | nan |
| 0.0433 | 0.8088675853804673 | 1350 | nan | nan | nan | nan |
| 0.0299 | 0.8388256440982624 | 1400 | nan | nan | nan | nan |
| nan | 0.8388256440982624 | 1400 | 0.027965761721134186 | 0.8886951467596629 | 0.8671077504725898 | 0.9113848599244984 |
| 0.0381 | 0.8687837028160575 | 1450 | nan | nan | nan | nan |
| 0.0298 | 0.8987417615338527 | 1500 | nan | nan | nan | nan |
| 0.0278 | 0.9286998202516477 | 1550 | nan | nan | nan | nan |
| 0.0261 | 0.9586578789694428 | 1600 | nan | nan | nan | nan |
| nan | 0.9586578789694428 | 1600 | 0.028123166412115097 | 0.9013783731314309 | 0.8811918770165117 | 0.9225114245976554 |
| 0.0262 | 0.9886159376872379 | 1650 | nan | nan | nan | nan |
| 0.0178 | 1.0185739964050329 | 1700 | nan | nan | nan | nan |
| 0.0132 | 1.048532055122828 | 1750 | nan | nan | nan | nan |
| 0.0157 | 1.078490113840623 | 1800 | nan | nan | nan | nan |
| nan | 1.078490113840623 | 1800 | 0.028561240062117577 | 0.9077876791941109 | 0.8856548856548857 | 0.9310550367574012 |
| 0.0132 | 1.1084481725584183 | 1850 | nan | nan | nan | nan |
| 0.0155 | 1.1384062312762133 | 1900 | nan | nan | nan | nan |
| 0.0129 | 1.1683642899940083 | 1950 | nan | nan | nan | nan |
| 0.0148 | 1.1983223487118035 | 2000 | nan | nan | nan | nan |
| nan | 1.1983223487118035 | 2000 | 0.026516983285546303 | 0.9101913315111284 | 0.8946459412780656 | 0.9262865090403338 |
| 0.0106 | 1.2282804074295985 | 2050 | nan | nan | nan | nan |
| 0.0108 | 1.2582384661473935 | 2100 | nan | nan | nan | nan |
| 0.0165 | 1.2881965248651888 | 2150 | nan | nan | nan | nan |
| 0.0167 | 1.3181545835829838 | 2200 | nan | nan | nan | nan |
| nan | 1.3181545835829838 | 2200 | 0.02606791816651821 | 0.9090380703283929 | 0.8869565217391304 | 0.932247168686668 |
| 0.0151 | 1.348112642300779 | 2250 | nan | nan | nan | nan |
| 0.0169 | 1.378070701018574 | 2300 | nan | nan | nan | nan |
| 0.0165 | 1.4080287597363692 | 2350 | nan | nan | nan | nan |
| 0.0127 | 1.4379868184541642 | 2400 | nan | nan | nan | nan |
| nan | 1.4379868184541642 | 2400 | 0.02857920527458191 | 0.9142130490071408 | 0.9003853564547206 | 0.9284720842439896 |
| 0.0131 | 1.4679448771719592 | 2450 | nan | nan | nan | nan |
| 0.0147 | 1.4979029358897544 | 2500 | nan | nan | nan | nan |
| 0.0131 | 1.5278609946075494 | 2550 | nan | nan | nan | nan |
| 0.0116 | 1.5578190533253444 | 2600 | nan | nan | nan | nan |
| nan | 1.5578190533253444 | 2600 | 0.0249184537678957 | 0.9242824958370065 | 0.9115146831530139 | 0.9374130737134909 |
| 0.0166 | 1.5877771120431396 | 2650 | nan | nan | nan | nan |
| 0.0145 | 1.6177351707609346 | 2700 | nan | nan | nan | nan |
| 0.0152 | 1.6476932294787296 | 2750 | nan | nan | nan | nan |
| 0.0119 | 1.6776512881965249 | 2800 | nan | nan | nan | nan |
| nan | 1.6776512881965249 | 2800 | 0.024047361686825752 | 0.9198402649264634 | 0.9021780664883454 | 0.9382078283330022 |
| 0.0157 | 1.70760934691432 | 2850 | nan | nan | nan | nan |
| 0.0139 | 1.737567405632115 | 2900 | nan | nan | nan | nan |
| 0.0161 | 1.76752546434991 | 2950 | nan | nan | nan | nan |
| 0.0118 | 1.7974835230677053 | 3000 | nan | nan | nan | nan |
| nan | 1.7974835230677053 | 3000 | 0.02312026545405388 | 0.9283470749901845 | 0.9173617846750728 | 0.9395986489171468 |
| 0.0097 | 1.8274415817855003 | 3050 | nan | nan | nan | nan |
| 0.016 | 1.8573996405032953 | 3100 | nan | nan | nan | nan |
| 0.013 | 1.8873576992210905 | 3150 | nan | nan | nan | nan |
| 0.0133 | 1.9173157579388855 | 3200 | nan | nan | nan | nan |
| nan | 1.9173157579388855 | 3200 | 0.023281875997781754 | 0.9217849819353578 | 0.9062980030721967 | 0.9378104510232466 |
| 0.0175 | 1.9472738166566805 | 3250 | nan | nan | nan | nan |
| 0.0154 | 1.9772318753744758 | 3300 | nan | nan | nan | nan |
| 0.0096 | 2.007189934092271 | 3350 | nan | nan | nan | nan |
| 0.0057 | 2.0371479928100658 | 3400 | nan | nan | nan | nan |
| nan | 2.0371479928100658 | 3400 | 0.023734014481306076 | 0.9255650818394388 | 0.9080481743452494 | 0.9437711106695807 |
| 0.0083 | 2.067106051527861 | 3450 | nan | nan | nan | nan |
| 0.0081 | 2.097064110245656 | 3500 | nan | nan | nan | nan |
| 0.0075 | 2.127022168963451 | 3550 | nan | nan | nan | nan |
| 0.0057 | 2.156980227681246 | 3600 | nan | nan | nan | nan |
| nan | 2.156980227681246 | 3600 | 0.025288647040724754 | 0.9282016215688189 | 0.9129515757109915 | 0.9439697993244586 |
| 0.0049 | 2.1869382863990414 | 3650 | nan | nan | nan | nan |
| 0.0076 | 2.2168963451168366 | 3700 | nan | nan | nan | nan |
| 0.0067 | 2.2468544038346314 | 3750 | nan | nan | nan | nan |
| 0.009 | 2.2768124625524266 | 3800 | nan | nan | nan | nan |
| nan | 2.2768124625524266 | 3800 | 0.025000886991620064 | 0.926643935703848 | 0.9090214067278287 | 0.9449632425988476 |
| 0.0067 | 2.306770521270222 | 3850 | nan | nan | nan | nan |
| 0.0081 | 2.3367285799880166 | 3900 | nan | nan | nan | nan |
| 0.0073 | 2.366686638705812 | 3950 | nan | nan | nan | nan |
| 0.0059 | 2.396644697423607 | 4000 | nan | nan | nan | nan |
| nan | 2.396644697423607 | 4000 | 0.024387583136558533 | 0.9357906087638466 | 0.9235681114551083 | 0.9483409497317703 |
| 0.0073 | 2.426602756141402 | 4050 | nan | nan | nan | nan |
| 0.0075 | 2.456560814859197 | 4100 | nan | nan | nan | nan |
| 0.0052 | 2.4865188735769923 | 4150 | nan | nan | nan | nan |
| 0.006 | 2.516476932294787 | 4200 | nan | nan | nan | nan |
| nan | 2.516476932294787 | 4200 | 0.024684011936187744 | 0.9356576241308392 | 0.9225569718037853 | 0.9491357043512816 |
| 0.0088 | 2.5464349910125823 | 4250 | nan | nan | nan | nan |
| 0.0122 | 2.5763930497303775 | 4300 | nan | nan | nan | nan |
| 0.008 | 2.6063511084481723 | 4350 | nan | nan | nan | nan |
| 0.0072 | 2.6363091671659675 | 4400 | nan | nan | nan | nan |
| nan | 2.6363091671659675 | 4400 | 0.02404804341495037 | 0.9307684796406601 | 0.9151305683563749 | 0.9469501291476257 |
| 0.0075 | 2.6662672258837627 | 4450 | nan | nan | nan | nan |
| 0.0072 | 2.696225284601558 | 4500 | nan | nan | nan | nan |
| 0.0061 | 2.7261833433193527 | 4550 | nan | nan | nan | nan |
| 0.0057 | 2.756141402037148 | 4600 | nan | nan | nan | nan |
| nan | 2.756141402037148 | 4600 | 0.026269957423210144 | 0.9321470473210794 | 0.9176130895091434 | 0.9471488178025035 |
| 0.0038 | 2.786099460754943 | 4650 | nan | nan | nan | nan |
| 0.008 | 2.8160575194727384 | 4700 | nan | nan | nan | nan |
| 0.0073 | 2.846015578190533 | 4750 | nan | nan | nan | nan |
| 0.0065 | 2.8759736369083284 | 4800 | nan | nan | nan | nan |
| nan | 2.8759736369083284 | 4800 | 0.02427930384874344 | 0.9350088356567839 | 0.9241218707549 | 0.9461553745281145 |
| 0.0056 | 2.9059316956261236 | 4850 | nan | nan | nan | nan |
| 0.0074 | 2.9358897543439184 | 4900 | nan | nan | nan | nan |
| 0.0059 | 2.9658478130617136 | 4950 | nan | nan | nan | nan |
| 0.006 | 2.995805871779509 | 5000 | nan | nan | nan | nan |
| nan | 2.995805871779509 | 5000 | 0.025616737082600594 | 0.9314559499364427 | 0.9170196380438969 | 0.9463540631829922 |
| 0.004 | 3.0257639304973036 | 5050 | nan | nan | nan | nan |
| 0.0047 | 3.055721989215099 | 5100 | nan | nan | nan | nan |
| 0.0026 | 3.085680047932894 | 5150 | nan | nan | nan | nan |
| 0.0047 | 3.115638106650689 | 5200 | nan | nan | nan | nan |
| nan | 3.115638106650689 | 5200 | 0.02595394104719162 | 0.935866053069617 | 0.9225868725868726 | 0.9495330816610371 |
| 0.0036 | 3.145596165368484 | 5250 | nan | nan | nan | nan |
| 0.0035 | 3.1755542240862793 | 5300 | nan | nan | nan | nan |
| 0.0034 | 3.205512282804074 | 5350 | nan | nan | nan | nan |
| 0.0025 | 3.2354703415218693 | 5400 | nan | nan | nan | nan |
| nan | 3.2354703415218693 | 5400 | 0.02661316469311714 | 0.9393134651322907 | 0.9300740163615115 | 0.9487383270415259 |
| 0.0041 | 3.2654284002396645 | 5450 | nan | nan | nan | nan |
| 0.0033 | 3.2953864589574597 | 5500 | nan | nan | nan | nan |
| 0.0042 | 3.3253445176752545 | 5550 | nan | nan | nan | nan |
| 0.0033 | 3.3553025763930497 | 5600 | nan | nan | nan | nan |
| nan | 3.3553025763930497 | 5600 | 0.02527858316898346 | 0.9387915764613265 | 0.9300058490933906 | 0.947744883767137 |
| 0.0016 | 3.385260635110845 | 5650 | nan | nan | nan | nan |
| 0.0027 | 3.4152186938286397 | 5700 | nan | nan | nan | nan |
| 0.0032 | 3.445176752546435 | 5750 | nan | nan | nan | nan |
| 0.0054 | 3.47513481126423 | 5800 | nan | nan | nan | nan |
| nan | 3.47513481126423 | 5800 | 0.026085887104272842 | 0.9393850083505256 | 0.9290711232024874 | 0.9499304589707928 |
| 0.0041 | 3.5050928699820254 | 5850 | nan | nan | nan | nan |
| 0.0031 | 3.53505092869982 | 5900 | nan | nan | nan | nan |
| 0.0043 | 3.5650089874176154 | 5950 | nan | nan | nan | nan |
| 0.0036 | 3.5949670461354106 | 6000 | nan | nan | nan | nan |
| nan | 3.5949670461354106 | 6000 | 0.026285560801625252 | 0.9379647749510763 | 0.9240408714092925 | 0.9523147228293265 |
| 0.003 | 3.6249251048532054 | 6050 | nan | nan | nan | nan |
| 0.0036 | 3.6548831635710006 | 6100 | nan | nan | nan | nan |
| 0.003 | 3.684841222288796 | 6150 | nan | nan | nan | nan |
| 0.0018 | 3.7147992810065906 | 6200 | nan | nan | nan | nan |
| nan | 3.7147992810065906 | 6200 | 0.026966776698827744 | 0.9423455332546242 | 0.9333463262521926 | 0.9515199682098152 |
| 0.002 | 3.744757339724386 | 6250 | nan | nan | nan | nan |
| 0.0031 | 3.774715398442181 | 6300 | nan | nan | nan | nan |
| 0.0026 | 3.804673457159976 | 6350 | nan | nan | nan | nan |
| 0.0032 | 3.834631515877771 | 6400 | nan | nan | nan | nan |
| nan | 3.834631515877771 | 6400 | 0.02665964514017105 | 0.9404937543031378 | 0.9312426957537983 | 0.9499304589707928 |
| 0.0046 | 3.8645895745955663 | 6450 | nan | nan | nan | nan |
| 0.0037 | 3.894547633313361 | 6500 | nan | nan | nan | nan |
| 0.006 | 3.9245056920311563 | 6550 | nan | nan | nan | nan |
| 0.0041 | 3.9544637507489515 | 6600 | nan | nan | nan | nan |
| nan | 3.9544637507489515 | 6600 | 0.025271492078900337 | 0.9411996066863323 | 0.9316721822075141 | 0.9509239022451818 |
| 0.0028 | 3.9844218094667463 | 6650 | nan | nan | nan | nan |
| 0.003 | 4.014379868184542 | 6700 | nan | nan | nan | nan |
| 0.0027 | 4.044337926902337 | 6750 | nan | nan | nan | nan |
| 0.0017 | 4.0742959856201315 | 6800 | nan | nan | nan | nan |
| nan | 4.0742959856201315 | 6800 | 0.026743704453110695 | 0.9429133858267716 | 0.934269553345036 | 0.951718656864693 |
| 0.0029 | 4.104254044337927 | 6850 | nan | nan | nan | nan |
| 0.0025 | 4.134212103055722 | 6900 | nan | nan | nan | nan |
| 0.0017 | 4.164170161773517 | 6950 | nan | nan | nan | nan |
| 0.0015 | 4.194128220491312 | 7000 | nan | nan | nan | nan |
| nan | 4.194128220491312 | 7000 | 0.026866145431995392 | 0.9408062930186825 | 0.9312828499124002 | 0.9505265249354262 |
| 0.0014 | 4.224086279209107 | 7050 | nan | nan | nan | nan |
| 0.0023 | 4.254044337926902 | 7100 | nan | nan | nan | nan |
| 0.0034 | 4.284002396644698 | 7150 | nan | nan | nan | nan |
| 0.0027 | 4.313960455362492 | 7200 | nan | nan | nan | nan |
| nan | 4.313960455362492 | 7200 | 0.026673471555113792 | 0.9432142505658893 | 0.9344773790951638 | 0.9521160341744487 |
| 0.001 | 4.343918514080288 | 7250 | nan | nan | nan | nan |
| 0.0016 | 4.373876572798083 | 7300 | nan | nan | nan | nan |
| 0.0061 | 4.403834631515878 | 7350 | nan | nan | nan | nan |
| 0.0015 | 4.433792690233673 | 7400 | nan | nan | nan | nan |
| nan | 4.433792690233673 | 7400 | 0.026809940114617348 | 0.9424722194906088 | 0.9330218068535826 | 0.9521160341744487 |
| 0.0022 | 4.463750748951468 | 7450 | nan | nan | nan | nan |
| 0.001 | 4.493708807669263 | 7500 | nan | nan | nan | nan |
| 0.0015 | 4.5236668663870585 | 7550 | nan | nan | nan | nan |
| 0.0019 | 4.553624925104853 | 7600 | nan | nan | nan | nan |
| nan | 4.553624925104853 | 7600 | 0.02733566425740719 | 0.9416846652267818 | 0.9307199689501261 | 0.9529107887939599 |
| 0.0012 | 4.583582983822648 | 7650 | nan | nan | nan | nan |
| 0.0025 | 4.613541042540444 | 7700 | nan | nan | nan | nan |
| 0.001 | 4.6434991012582385 | 7750 | nan | nan | nan | nan |
| 0.0015 | 4.673457159976033 | 7800 | nan | nan | nan | nan |
| nan | 4.673457159976033 | 7800 | 0.027779242023825645 | 0.9427390791027155 | 0.9337361138179692 | 0.9519173455195709 |
| 0.0021 | 4.703415218693829 | 7850 | nan | nan | nan | nan |
| 0.0031 | 4.733373277411624 | 7900 | nan | nan | nan | nan |
| 0.0017 | 4.7633313361294185 | 7950 | nan | nan | nan | nan |
| 0.0022 | 4.793289394847214 | 8000 | nan | nan | nan | nan |
| nan | 4.793289394847214 | 8000 | 0.02728326804935932 | 0.9410609037328094 | 0.9306392073052263 | 0.951718656864693 |
| 0.0023 | 4.823247453565009 | 8050 | nan | nan | nan | nan |
| 0.0019 | 4.853205512282804 | 8100 | nan | nan | nan | nan |
| 0.0011 | 4.883163571000599 | 8150 | nan | nan | nan | nan |
| 0.0028 | 4.913121629718394 | 8200 | nan | nan | nan | nan |
| nan | 4.913121629718394 | 8200 | 0.02626235969364643 | 0.9436411920920625 | 0.9343591741332294 | 0.9531094774488377 |
| 0.002 | 4.943079688436189 | 8250 | nan | nan | nan | nan |
| 0.0013 | 4.973037747153985 | 8300 | nan | nan | nan | nan |
| 0.0023 | 5.002995805871779 | 8350 | nan | nan | nan | nan |
| 0.001 | 5.032953864589574 | 8400 | nan | nan | nan | nan |
| nan | 5.032953864589574 | 8400 | 0.02664945460855961 | 0.9427953607234125 | 0.9328924333787201 | 0.9529107887939599 |
| 0.0008 | 5.06291192330737 | 8450 | nan | nan | nan | nan |
| 0.001 | 5.092869982025165 | 8500 | nan | nan | nan | nan |
| 0.0019 | 5.12282804074296 | 8550 | nan | nan | nan | nan |
| 0.0016 | 5.152786099460755 | 8600 | nan | nan | nan | nan |
| nan | 5.152786099460755 | 8600 | 0.02724417671561241 | 0.942354905234214 | 0.9316504854368932 | 0.9533081661037155 |
| 0.001 | 5.18274415817855 | 8650 | nan | nan | nan | nan |
| 0.001 | 5.2127022168963455 | 8700 | nan | nan | nan | nan |
| 0.0008 | 5.24266027561414 | 8750 | nan | nan | nan | nan |
| 0.0009 | 5.272618334331935 | 8800 | nan | nan | nan | nan |
| nan | 5.272618334331935 | 8800 | 0.027002455666661263 | 0.9444772593030125 | 0.936 | 0.9531094774488377 |
| 0.0015 | 5.302576393049731 | 8850 | nan | nan | nan | nan |
| 0.0027 | 5.3325344517675255 | 8900 | nan | nan | nan | nan |
| 0.0015 | 5.36249251048532 | 8950 | nan | nan | nan | nan |
| 0.0018 | 5.392450569203116 | 9000 | nan | nan | nan | nan |
| nan | 5.392450569203116 | 9000 | 0.026851218193769455 | 0.9485511531638084 | 0.941130451789556 | 0.9560898072720048 |
| 0.0011 | 5.422408627920911 | 9050 | nan | nan | nan | nan |
| 0.001 | 5.4523666866387055 | 9100 | nan | nan | nan | nan |
| 0.0009 | 5.482324745356501 | 9150 | nan | nan | nan | nan |
| 0.0022 | 5.512282804074296 | 9200 | nan | nan | nan | nan |
| nan | 5.512282804074296 | 9200 | 0.026996750384569168 | 0.9460842188114915 | 0.9370493081270708 | 0.9552950526524936 |
| 0.0018 | 5.542240862792091 | 9250 | nan | nan | nan | nan |
| 0.0008 | 5.572198921509886 | 9300 | nan | nan | nan | nan |
| 0.0017 | 5.602156980227681 | 9350 | nan | nan | nan | nan |
| 0.0009 | 5.632115038945477 | 9400 | nan | nan | nan | nan |
| nan | 5.632115038945477 | 9400 | 0.02744028903543949 | 0.9479802955665025 | 0.9401993355481728 | 0.955891118617127 |
| 0.0012 | 5.662073097663272 | 9450 | nan | nan | nan | nan |
| 0.0016 | 5.692031156381066 | 9500 | nan | nan | nan | nan |
| 0.0007 | 5.721989215098862 | 9550 | nan | nan | nan | nan |
| 0.0006 | 5.751947273816657 | 9600 | nan | nan | nan | nan |
| nan | 5.751947273816657 | 9600 | 0.027505146339535713 | 0.9460523725142744 | 0.937560975609756 | 0.9546989866878601 |
| 0.0007 | 5.781905332534452 | 9650 | nan | nan | nan | nan |
| 0.0007 | 5.811863391252247 | 9700 | nan | nan | nan | nan |
| 0.0007 | 5.841821449970042 | 9750 | nan | nan | nan | nan |
| 0.001 | 5.871779508687837 | 9800 | nan | nan | nan | nan |
| nan | 5.871779508687837 | 9800 | 0.02771810069680214 | 0.9461561177281229 | 0.9375731564572767 | 0.954897675342738 |
| 0.0011 | 5.9017375674056325 | 9850 | nan | nan | nan | nan |
| 0.001 | 5.931695626123427 | 9900 | nan | nan | nan | nan |
| 0.0008 | 5.961653684841222 | 9950 | nan | nan | nan | nan |
| 0.002 | 5.991611743559018 | 10000 | nan | nan | nan | nan |
| nan | 5.991611743559018 | 10000 | 0.027671782299876213 | 0.946808510638298 | 0.9388552451650714 | 0.954897675342738 |
| nan | 6.0 | 10014 | nan | nan | nan | nan |
## Framework versions
- Transformers: 2.3.0
- Pytorch: (see environment)
- Datasets: (see environment)
- Tokenizers: (see environment)
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.05_0.5_epoch1 | MinaMila | 2025-06-16T07:07:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T07:05:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LarryAIDraw/Genshin_Yae_Miko | LarryAIDraw | 2025-06-16T07:04:33Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-16T06:22:38Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/1662698/genshin-yae-miko |
Hectore/80s_commercial_screenshot | Hectore | 2025-06-16T07:04:12Z | 222 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] | text-to-image | 2025-05-26T10:27:37Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/80S_COMMERCIALS_SCREENSHOT_e000001_00_20250524094529.png
- text: '-'
output:
url: images/80S_COMMERCIALS_SCREENSHOT_e000006_02_20250524124152.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: 80scommercial
license: mit
---
# 80S COMMERCIAL SCREENSHOT
<Gallery />
## Model description
A LoRA for Flux designed to generate screenshots inspired by 1980s commercials: low-resolution images, retro graphics, overlaid text, and color palettes typical of the TV era. Ideal for creating nostalgic, vintage broadcast-style scenes.
## Trigger words
You should use `80scommercial` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Hectore/80s_commercial_screenshot/tree/main) them in the Files & versions tab.
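## Use it with 🧨 diffusers

A minimal sketch following the usual diffusers pattern for FLUX LoRAs. The exact weight filename is not listed on this card, so the loading call below assumes diffusers can locate the single LoRA file on its own, and the prompt is illustrative:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
# Assumption: the repo holds one LoRA safetensors file that diffusers can find.
pipeline.load_lora_weights("Hectore/80s_commercial_screenshot")
image = pipeline("80scommercial, smiling family around a new microwave, TV ad still").images[0]
```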
|
Wilbur1240/ppo-SnowballTarget | Wilbur1240 | 2025-06-16T07:03:43Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2025-06-16T07:03:34Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Wilbur1240/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
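To run the agent locally instead of in the browser, the usual ML-Agents Hub workflow applies. A sketch, assuming the standard `mlagents-load-from-hf` helper from the Hub integration is available in your install:

```bash
# Download the trained agent from the Hub into ./downloads
mlagents-load-from-hf --repo-id="Wilbur1240/ppo-SnowballTarget" --local-dir="./downloads"
```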
|
GAGABIG/CNN | GAGABIG | 2025-06-16T07:02:28Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-13T09:57:53Z | ---
license: apache-2.0
---
|
BootesVoid/cmbyjnk1403xvrdqsg2kyovgu_cmbypcpjv048qrdqs299msggc | BootesVoid | 2025-06-16T07:00:50Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-16T07:00:49Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: EMILY01
---
# Cmbyjnk1403Xvrdqsg2Kyovgu_Cmbypcpjv048Qrdqs299Msggc
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `EMILY01` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "EMILY01",
    "lora_weights": "https://huggingface.co/BootesVoid/cmbyjnk1403xvrdqsg2kyovgu_cmbypcpjv048qrdqs299msggc/resolve/main/lora.safetensors"
}
output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbyjnk1403xvrdqsg2kyovgu_cmbypcpjv048qrdqs299msggc', weight_name='lora.safetensors')
image = pipeline('EMILY01').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbyjnk1403xvrdqsg2kyovgu_cmbypcpjv048qrdqs299msggc/discussions) to add images that show off what you've made with this LoRA.
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.15_0.75_epoch1 | MinaMila | 2025-06-16T07:00:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:58:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Baron-qui/distilhubert-finetuned-gtzan | Baron-qui | 2025-06-16T06:59:19Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2025-06-14T00:12:46Z | ---
library_name: transformers
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.84
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5618
- Accuracy: 0.84
## Model description
More information needed
## Intended uses & limitations
More information needed
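Until then, a minimal inference sketch (assuming this card's repo id and a local audio clip; the filename is illustrative):

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="Baron-qui/distilhubert-finetuned-gtzan",
)
print(classifier("my_song.wav"))  # top GTZAN genre predictions for the clip
```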
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8647 | 1.0 | 150 | 1.7671 | 0.58 |
| 1.0983 | 2.0 | 300 | 1.1722 | 0.65 |
| 0.875 | 3.0 | 450 | 0.9809 | 0.73 |
| 0.596 | 4.0 | 600 | 0.9323 | 0.75 |
| 0.4549 | 5.0 | 750 | 0.6444 | 0.82 |
| 0.1644 | 6.0 | 900 | 0.5420 | 0.85 |
| 0.136 | 7.0 | 1050 | 0.5333 | 0.82 |
| 0.1289 | 8.0 | 1200 | 0.6917 | 0.82 |
| 0.029 | 9.0 | 1350 | 0.5613 | 0.85 |
| 0.0409 | 10.0 | 1500 | 0.5618 | 0.84 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Anni0401/Olaf_Scholz_Llama-8B_v0.1 | Anni0401 | 2025-06-16T06:59:09Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct",
"region:us"
] | null | 2025-06-16T06:56:22Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
numiros/Comma-Epsilon-v0.1-exl2 | numiros | 2025-06-16T06:55:19Z | 0 | 0 | null | [
"exl2",
"base_model:numiros/Comma-Epsilon-v0.1",
"base_model:finetune:numiros/Comma-Epsilon-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T05:47:23Z | ---
license: apache-2.0
base_model:
- numiros/Comma-Epsilon-v0.1
tags:
- exl2
---
[4bpw](https://huggingface.co/numiros/Comma-Epsilon-v0.1-exl2/tree/4bpw)
[5bpw](https://huggingface.co/numiros/Comma-Epsilon-v0.1-exl2/tree/5bpw) |
John6666/satyr-remix-ankara-illustrious-v17-sdxl | John6666 | 2025-06-16T06:55:10Z | 0 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"fantasy",
"paintery",
"styles",
"prompt comphrehension",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-06-16T06:49:23Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- fantasy
- paintery
- styles
- prompt comphrehension
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/974951?modelVersionId=1905968).
This model was created by [Labdoge207](https://civitai.com/user/Labdoge207).
|
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.05_0.75_epoch1 | MinaMila | 2025-06-16T06:54:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:52:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.25_0.05_epoch2 | MinaMila | 2025-06-16T06:52:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:50:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MalvinasMan/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-arctic_opaque_jaguar | MalvinasMan | 2025-06-16T06:50:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am arctic opaque jaguar",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-29T13:03:46Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-arctic_opaque_jaguar
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am arctic opaque jaguar
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-arctic_opaque_jaguar
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MalvinasMan/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-arctic_opaque_jaguar", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
rafaelrocha1214/msa0o | rafaelrocha1214 | 2025-06-16T06:49:42Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-06-16T06:49:42Z | ---
license: bigcode-openrail-m
---
|
davidamaral1029/msa0o | davidamaral1029 | 2025-06-16T06:49:42Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-06-16T06:49:42Z | ---
license: bigcode-openrail-m
---
|
westlake-repl/SaProt_1.3B_AFDB_OMG_NCBI | westlake-repl | 2025-06-16T06:48:45Z | 197 | 0 | null | [
"pytorch",
"safetensors",
"esm",
"license:mit",
"region:us"
] | null | 2025-05-06T12:26:20Z | ---
license: mit
---
We further trained a 1.3 billion parameter version of the SaProt model, setting the context length to 1536 during training, and used a combined dataset of AFDB, OMG_prot50, and NCBI (70% identity filtering), totaling 383 million sequences. The training strategy is similar to that of [SaProt-O](https://github.com/westlake-repl/Denovo-Pinal/wiki/Tutorial), employing multimodal input integration (sequence and structural data) to ensure better alignment with real-world research applications.
Specifically, the training data is a mixture of UniRef50 (40%), OMG (30%), and NCBI (30%).
For sequences from OMG and NCBI lacking corresponding structural information, we employ masked language modeling, where the model predicts the masked amino acid tokens.
For the UniRef50 dataset, which includes structural data, we applied four distinct training strategies, each sampled with equal probability (25%):
- Predicting all amino acid tokens given partial masked structural tokens.
- Predicting all amino acid tokens given complete structural tokens.
- Predicting partial amino acid tokens given their amino acid token context and partial masked structural tokens.
- Predicting partial amino acid tokens given their amino acid token context and complete structural tokens.
SaProt_1.3B_AFDB_OMG_NCBI is also very useful for protein editing. For instance, if you wish to modify certain regions of your protein, whether natural or de novo designed, you can simply mask those amino acids while inputting partial or complete structures. Remarkably, the model functions effectively even if only sequence data is provided. If you have text data and would like to incorporate it, please refer to [SaProt-T/O](http://113.45.254.183:9527/). The relevant link can be found in the interface of [Pinal](http://www.denovo-pinal.com/).
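As an illustration, here is a minimal masked-prediction sketch for such editing, assuming the tokenizer's generic `<mask>` token stands in for the position being redesigned (the sequence and masked position below are purely hypothetical; depending on the training strategies above, you may instead want to mask only the amino-acid half of a structure-aware token):
```python
import torch
from transformers import EsmTokenizer, EsmForMaskedLM

model_path = "westlake-repl/SaProt_1.3B_AFDB_OMG_NCBI"
tokenizer = EsmTokenizer.from_pretrained(model_path)
model = EsmForMaskedLM.from_pretrained(model_path).eval()

# Hypothetical structure-aware sequence; mask position 3 for redesign.
tokens = tokenizer.tokenize("M#EvVpQpL#VyQdYaKv")
tokens[3] = tokenizer.mask_token

# Build input ids manually so the 2-character SA tokens stay intact.
ids = [tokenizer.cls_token_id] + tokenizer.convert_tokens_to_ids(tokens) + [tokenizer.eos_token_id]
with torch.no_grad():
    logits = model(input_ids=torch.tensor([ids])).logits

# +1 accounts for the <cls> token prepended above.
pred_id = logits[0, 3 + 1].argmax(-1).item()
print(tokenizer.convert_ids_to_tokens([pred_id]))
```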
### Loading model from huggingface
> SaProt_1.3B_AFDB_OMG_NCBI, unlike [SaProt_650M_AF2](https://huggingface.co/westlake-repl/SaProt_650M_AF2), **does not support** loading from the esm library
The following code shows how to load the model.
```
from transformers import EsmTokenizer, EsmForMaskedLM
model_path = "/your/path/to/SaProt_1.3B_AFDB_OMG_NCBI"
tokenizer = EsmTokenizer.from_pretrained(model_path)
model = EsmForMaskedLM.from_pretrained(model_path)
#################### Example ####################
device = "cuda"
model.to(device)
seq = "M#EvVpQpL#VyQdYaKv" # Here "#" represents low-confidence pLDDT regions (pLDDT < 70)
tokens = tokenizer.tokenize(seq)
print(tokens)
inputs = tokenizer(seq, return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}
outputs = model(**inputs)
print(outputs.logits.shape)
"""
['M#', 'Ev', 'Vp', 'Qp', 'L#', 'Vy', 'Qd', 'Ya', 'Kv']
torch.Size([1, 11, 446])
"""
``` |
Alptekinege/Huihui-MoE-1.5B-A0.6B-abliterated-Q6_K-GGUF | Alptekinege | 2025-06-16T06:45:28Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-MoE-1.5B-A0.6B-abliterated",
"base_model:quantized:huihui-ai/Huihui-MoE-1.5B-A0.6B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:45:19Z | ---
license: apache-2.0
base_model: huihui-ai/Huihui-MoE-1.5B-A0.6B-abliterated
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- moe
- llama-cpp
- gguf-my-repo
extra_gated_prompt: '**Usage Warnings**
**Risk of Sensitive or Controversial Outputs**: This model’s safety filtering
has been significantly reduced, potentially generating sensitive, controversial,
or inappropriate content. Users should exercise caution and rigorously review generated
outputs.
**Not Suitable for All Audiences**: Due to limited content filtering, the model’s
outputs may be inappropriate for public settings, underage users, or applications
requiring high security.
**Legal and Ethical Responsibilities**: Users must ensure their usage complies
with local laws and ethical standards. Generated content may carry legal or ethical
risks, and users are solely responsible for any consequences.
**Research and Experimental Use**: It is recommended to use this model for research,
testing, or controlled environments, avoiding direct use in production or public-facing
commercial applications.
**Monitoring and Review Recommendations**: Users are strongly advised to monitor
model outputs in real-time and conduct manual reviews when necessary to prevent
the dissemination of inappropriate content.
**No Default Safety Guarantees**: Unlike standard models, this model has not undergone
rigorous safety optimization. huihui.ai bears no responsibility for any consequences
arising from its use.'
---
# Alptekinege/Huihui-MoE-1.5B-A0.6B-abliterated-Q6_K-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-1.5B-A0.6B-abliterated`](https://huggingface.co/huihui-ai/Huihui-MoE-1.5B-A0.6B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-1.5B-A0.6B-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Alptekinege/Huihui-MoE-1.5B-A0.6B-abliterated-Q6_K-GGUF --hf-file huihui-moe-1.5b-a0.6b-abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Alptekinege/Huihui-MoE-1.5B-A0.6B-abliterated-Q6_K-GGUF --hf-file huihui-moe-1.5b-a0.6b-abliterated-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Alptekinege/Huihui-MoE-1.5B-A0.6B-abliterated-Q6_K-GGUF --hf-file huihui-moe-1.5b-a0.6b-abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Alptekinege/Huihui-MoE-1.5B-A0.6B-abliterated-Q6_K-GGUF --hf-file huihui-moe-1.5b-a0.6b-abliterated-q6_k.gguf -c 2048
```
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.25_0.05_epoch1 | MinaMila | 2025-06-16T06:44:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:42:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
phucminh/deepseek-finetuned_new | phucminh | 2025-06-16T06:40:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-16T06:38:06Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** phucminh
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
prithivMLmods/visionOCR-3B-061125 | prithivMLmods | 2025-06-16T06:38:00Z | 36 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"text-generation-inference",
"OCR",
"Receipt",
"VisionOCR",
"Messy Handwriting OCR",
"conversational",
"en",
"zh",
"dataset:linxy/LaTeX_OCR",
"dataset:mychen76/ds_receipts_v2_eval",
"dataset:mychen76/invoices-and-receipts_ocr_v1",
"dataset:prithivMLmods/Latex-KIE",
"arxiv:2412.08746",
"arxiv:2309.00071",
"arxiv:2409.12191",
"arxiv:2308.12966",
"arxiv:2412.02210",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-06-11T15:09:35Z | ---
license: apache-2.0
language:
- en
- zh
tags:
- text-generation-inference
- OCR
- Receipt
- VisionOCR
- Messy Handwriting OCR
datasets:
- linxy/LaTeX_OCR
- mychen76/ds_receipts_v2_eval
- mychen76/invoices-and-receipts_ocr_v1
- prithivMLmods/Latex-KIE
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
pipeline_tag: image-text-to-text
library_name: transformers
---

# **visionOCR-3B-061125**
> The **visionOCR-3B-061125** model is a fine-tuned version of **Qwen/Qwen2.5-VL-3B-Instruct**, optimized for **Document-Level Optical Character Recognition (OCR)**, **long-context vision-language understanding**, and **accurate image-to-text conversion with mathematical LaTeX formatting**. Built on top of the Qwen2.5-VL architecture, this model significantly improves document comprehension, structured data extraction, and visual reasoning across diverse input formats.
# Key Enhancements
* **Advanced Document-Level OCR**: Capable of extracting structured content from complex, multi-page documents such as invoices, academic papers, forms, and scanned reports.
* **Enhanced Long-Context Vision-Language Understanding**: Designed to handle dense document layouts, long sequences of embedded text, tables, and diagrams with coherent cross-reference understanding.
* **State-of-the-Art Performance Across Resolutions**: Achieves competitive results on OCR and visual QA benchmarks such as DocVQA, MathVista, RealWorldQA, and MTVQA.
* **Video Understanding up to 20+ minutes**: Supports detailed comprehension of long-duration videos for content summarization, Q\&A, and multi-modal reasoning.
* **Visually-Grounded Device Interaction**: Enables mobile/robotic device operation via visual inputs and text-based instructions using contextual understanding and decision-making logic.
# Quick Start with Transformers
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"prithivMLmods/visionOCR-3B-061125", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("prithivMLmods/visionOCR-3B-061125")
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
# Intended Use
This model is intended for:
* High-fidelity OCR from documents, forms, receipts, and printed or scanned materials.
* Image and document-based question answering for educational and enterprise applications.
* Extraction and LaTeX formatting of mathematical expressions from printed or handwritten content.
* Retrieval and summarization from long documents, slides, and multi-modal inputs.
* Multilingual OCR and structured content extraction for global use cases.
* Robotic or mobile automation with vision-guided contextual interaction.
# Limitations
* May show degraded performance on extremely low-quality or occluded images.
* Not optimized for real-time applications on low-resource or edge devices due to computational demands.
* Variable accuracy on uncommon or low-resource languages/scripts.
* Long video processing may require substantial memory and is not optimized for streaming applications.
* Visual token settings affect performance; suboptimal configurations can impact results.
* In rare cases, outputs may contain hallucinated or contextually misaligned information.
## References
* **DocVLM: Make Your VLM an Efficient Reader**
[https://arxiv.org/pdf/2412.08746v1](https://arxiv.org/pdf/2412.08746v1)
* **YaRN: Efficient Context Window Extension of Large Language Models**
[https://arxiv.org/pdf/2309.00071](https://arxiv.org/pdf/2309.00071)
* **Qwen2-VL: Enhancing Vision-Language Modelโs Perception of the World at Any Resolution**
[https://arxiv.org/pdf/2409.12191](https://arxiv.org/pdf/2409.12191)
* **Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond**
[https://arxiv.org/pdf/2308.12966](https://arxiv.org/pdf/2308.12966)
* **A Comprehensive and Challenging OCR Benchmark for Evaluating Large Multimodal Models in Literacy**
[https://arxiv.org/pdf/2412.02210](https://arxiv.org/pdf/2412.02210) |
prithivMLmods/visionOCR-3B-061125-GGUF | prithivMLmods | 2025-06-16T06:37:56Z | 131 | 0 | transformers | [
"transformers",
"gguf",
"qwen2_5_vl",
"text-generation-inference",
"OCR",
"image-text-to-text",
"en",
"base_model:prithivMLmods/visionOCR-3B-061125",
"base_model:quantized:prithivMLmods/visionOCR-3B-061125",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-06-12T05:09:58Z | ---
license: apache-2.0
tags:
- text-generation-inference
- OCR
base_model:
- prithivMLmods/visionOCR-3B-061125
language:
- en
pipeline_tag: image-text-to-text
library_name: transformers
---
# **visionOCR-3B-061125-GGUF**
> The **visionOCR-3B-061125** model is a fine-tuned version of **Qwen/Qwen2.5-VL-3B-Instruct**, optimized for **Document-Level Optical Character Recognition (OCR)**, **long-context vision-language understanding**, and **accurate image-to-text conversion with mathematical LaTeX formatting**. Built on top of the Qwen2.5-VL architecture, this model significantly improves document comprehension, structured data extraction, and visual reasoning across diverse input formats.
## Model Files
| File Name | Size | Format | Description |
|-----------|------|--------|-------------|
| visionOCR-3B-061125-BF16.gguf | 6.18 GB | BF16 | Brain floating point 16-bit |
| visionOCR-3B-061125-Q6_K.gguf | 2.54 GB | Q6_K | 6-bit quantized |
| visionOCR-3B-061125-Q5_K_M.gguf | 2.22 GB | Q5_K_M | 5-bit quantized, medium quality |
| visionOCR-3B-061125-Q4_K_M.gguf | 1.93 GB | Q4_K_M | 4-bit quantized, medium quality |
| visionOCR-3B-061125-Q3_K_M.gguf | 1.59 GB | Q3_K_M | 3-bit quantized, medium quality |
| visionOCR-3B-061125-Q3_K_S.gguf | 1.45 GB | Q3_K_S | 3-bit quantized, small quality |
| visionOCR-3B-061125-Q2_K.gguf | 1.27 GB | Q2_K | 2-bit quantized |
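As a rough usage sketch, any of these files can be pulled and run with the standard llama.cpp CLI (the quant below is just an example; image input additionally depends on multimodal support in your llama.cpp build):
```bash
llama-cli --hf-repo prithivMLmods/visionOCR-3B-061125-GGUF --hf-file visionOCR-3B-061125-Q4_K_M.gguf -p "Extract the text from the following document:"
```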
## Quants Usage
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
 |
dgambettaphd/M_llm2_run2_gen10_WXS_doc1000_synt64_lr1e-04_acm_FRESH | dgambettaphd | 2025-06-16T06:37:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T06:37:35Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q8_0-GGUF | Triangle104 | 2025-06-16T06:36:03Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
"base_model:quantized:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-16T06:33:11Z | ---
license: apache-2.0
base_model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q8_0-GGUF
This model was converted to GGUF format from [`cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition`](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) for more details on the model.
---
Dolphin Mistral 24B Venice Edition is a collaborative project we undertook with Venice.ai with the goal of creating the most uncensored version of Mistral 24B for use within the Venice ecosystem.
Dolphin Mistral 24B Venice Edition is now live on https://venice.ai/ as “Venice Uncensored,” the new default model for all Venice users.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q8_0-GGUF --hf-file dolphin-mistral-24b-venice-edition-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q8_0-GGUF --hf-file dolphin-mistral-24b-venice-edition-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q8_0-GGUF --hf-file dolphin-mistral-24b-venice-edition-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Dolphin-Mistral-24B-Venice-Edition-Q8_0-GGUF --hf-file dolphin-mistral-24b-venice-edition-q8_0.gguf -c 2048
```
|
ka-ops/Meta-Llama-3.1-8B-Instruct-FP8 | ka-ops | 2025-06-16T06:35:16Z | 0 | 0 | null | [
"safetensors",
"llama",
"fp8",
"vllm",
"text-generation",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"compressed-tensors",
"region:us"
] | text-generation | 2025-06-16T06:24:00Z | ---
tags:
- fp8
- vllm
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
pipeline_tag: text-generation
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
---
# Meta-Llama-3.1-8B-Instruct-FP8
## Model Overview
- **Model Architecture:** Meta-Llama-3.1
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** FP8
- **Activation quantization:** FP8
- **Intended Use Cases:** Intended for commercial and research use in multiple languages. Similarly to [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- **Release Date:** 7/23/2024
- **Version:** 1.0
- **License(s):** [llama3.1](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B/blob/main/LICENSE)
- **Model Developers:** Neural Magic
Quantized version of [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
It achieves an average score of 73.44 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 73.79.
### Model Optimizations
This model was obtained by quantizing the weights and activations of [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) to FP8 data type, ready for inference with vLLM built from source.
This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%.
Only the weights and activations of the linear operators within transformers blocks are quantized. Symmetric per-tensor quantization is applied, in which a single linear scaling maps the FP8 representations of the quantized weights and activations.
[LLM Compressor](https://github.com/vllm-project/llm-compressor) is used for quantization with 512 sequences of UltraChat.
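For intuition, here is a toy sketch of what per-tensor symmetric FP8 (E4M3) quantization does; this is illustrative only, not the LLM Compressor implementation, and needs PyTorch >= 2.1 for the float8 dtype:
```python
import torch

def quantize_per_tensor_fp8(w: torch.Tensor):
    # One symmetric scale per tensor: map the largest magnitude onto
    # the FP8 E4M3 maximum representable value (~448).
    scale = w.abs().max().float() / 448.0
    w_fp8 = (w / scale).to(torch.float8_e4m3fn)
    return w_fp8, scale

def dequantize(w_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return w_fp8.to(torch.float16) * scale

w = torch.randn(4096, 4096, dtype=torch.float16)  # stand-in for a Linear weight
w_fp8, scale = quantize_per_tensor_fp8(w)
print("mean abs error:", (dequantize(w_fp8, scale) - w).abs().mean().item())
```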
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8"
sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompts = tokenizer.apply_chat_template(messages, tokenize=False)
llm = LLM(model=model_id)
outputs = llm.generate(prompts, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
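For instance, a minimal OpenAI-compatible deployment might look like this (a sketch; recent vLLM versions expose the `vllm serve` entry point, while older ones use `python -m vllm.entrypoints.openai.api_server`):
```bash
vllm serve neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8

# then, from another shell:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",
        "messages": [{"role": "user", "content": "Who are you?"}]
      }'
```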
## Creation
This model was created by applying [LLM Compressor with calibration samples from UltraChat](https://github.com/vllm-project/llm-compressor/blob/sa/big_model_support/examples/big_model_offloading/big_model_w8a8_calibrate.py), as presented in the code snippet below.
```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot
from llmcompressor.transformers.compression.helpers import (
calculate_offload_device_map,
custom_offload_device_map,
)
recipe = """
quant_stage:
quant_modifiers:
QuantizationModifier:
ignore: ["lm_head"]
config_groups:
group_0:
weights:
num_bits: 8
type: float
strategy: tensor
dynamic: false
symmetric: true
input_activations:
num_bits: 8
type: float
strategy: tensor
dynamic: false
symmetric: true
targets: ["Linear"]
"""
model_stub = "meta-llama/Meta-Llama-3.1-8B-Instruct"
model_name = model_stub.split("/")[-1]
device_map = calculate_offload_device_map(
model_stub, reserve_for_hessians=False, num_gpus=1, torch_dtype="auto"
)
model = SparseAutoModelForCausalLM.from_pretrained(
model_stub, torch_dtype="auto", device_map=device_map
)
tokenizer = AutoTokenizer.from_pretrained(model_stub)
output_dir = f"./{model_name}-FP8"
DATASET_ID = "HuggingFaceH4/ultrachat_200k"
DATASET_SPLIT = "train_sft"
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 4096
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))
def preprocess(example):
return {
"text": tokenizer.apply_chat_template(
example["messages"],
tokenize=False,
)
}
ds = ds.map(preprocess)
def tokenize(sample):
return tokenizer(
sample["text"],
padding=False,
max_length=MAX_SEQUENCE_LENGTH,
truncation=True,
add_special_tokens=False,
)
ds = ds.map(tokenize, remove_columns=ds.column_names)
oneshot(
model=model,
output_dir=output_dir,
dataset=ds,
recipe=recipe,
max_seq_length=MAX_SEQUENCE_LENGTH,
num_calibration_samples=NUM_CALIBRATION_SAMPLES,
save_compressed=True,
)
```
## Evaluation
The model was evaluated on MMLU, ARC-Challenge, GSM-8K, Hellaswag, Winogrande and TruthfulQA.
Evaluation was conducted using the Neural Magic fork of [lm-evaluation-harness](https://github.com/neuralmagic/lm-evaluation-harness/tree/llama_3.1_instruct) (branch llama_3.1_instruct) and the [vLLM](https://docs.vllm.ai/en/stable/) engine.
This version of the lm-evaluation-harness includes versions of ARC-Challenge, GSM-8K, MMLU, and MMLU-cot that match the prompting style of [Meta-Llama-3.1-Instruct-evals](https://huggingface.co/datasets/meta-llama/Meta-Llama-3.1-8B-Instruct-evals).
### Accuracy
#### Open LLM Leaderboard evaluation scores
| Benchmark | Meta-Llama-3.1-8B-Instruct | Meta-Llama-3.1-8B-Instruct-FP8 (this model) | Recovery |
|-----------|----------------------------|---------------------------------------------|----------|
| MMLU (5-shot) | 67.95 | 67.97 | 100.0% |
| MMLU-cot (0-shot) | 71.24 | 71.12 | 99.83% |
| ARC Challenge (0-shot) | 82.00 | 81.66 | 99.59% |
| GSM-8K-cot (8-shot, strict-match) | 81.96 | 81.12 | 98.98% |
| Hellaswag (10-shot) | 80.46 | 80.4 | 99.93% |
| Winogrande (5-shot) | 78.45 | 77.90 | 99.30% |
| TruthfulQA (0-shot, mc2) | 54.50 | 53.92 | 98.94% |
| **Average** | **73.79** | **73.44** | **99.52%** |
### Reproduction
The results were obtained using the following commands:
#### MMLU
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks mmlu \
--num_fewshot 5 \
--batch_size auto
```
#### MMLU-cot
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks mmlu_cot_0shot_llama_3.1_instruct \
--apply_chat_template \
--num_fewshot 0 \
--batch_size auto
```
#### ARC-Challenge
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks arc_challenge_llama_3.1_instruct \
--apply_chat_template \
--num_fewshot 0 \
--batch_size auto
```
#### GSM-8K
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks gsm8k_cot_llama_3.1_instruct \
--apply_chat_template \
--fewshot_as_multiturn \
--num_fewshot 8 \
--batch_size auto
```
#### Hellaswag
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks hellaswag \
--num_fewshot 10 \
--batch_size auto
```
#### Winogrande
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks winogrande \
--num_fewshot 5 \
--batch_size auto
```
#### TruthfulQA
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
--tasks truthfulqa \
--num_fewshot 0 \
--batch_size auto
``` |
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.15_0.15_epoch2 | MinaMila | 2025-06-16T06:33:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:31:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kaonan/my-gemma-2-finetuned-model | kaonan | 2025-06-16T06:33:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-06-16T06:28:11Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mrpeerat/new_model | mrpeerat | 2025-06-16T06:32:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:aisingapore/Gemma-SEA-LION-v3-9B",
"base_model:merge:aisingapore/Gemma-SEA-LION-v3-9B",
"base_model:aisingapore/Gemma-SEA-LION-v3-9B-IT",
"base_model:merge:aisingapore/Gemma-SEA-LION-v3-9B-IT",
"base_model:google/gemma-2-9b-it",
"base_model:merge:google/gemma-2-9b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:29:10Z | ---
base_model:
- aisingapore/Gemma-SEA-LION-v3-9B
- aisingapore/Gemma-SEA-LION-v3-9B-IT
- google/gemma-2-9b-it
library_name: transformers
tags:
- mergekit
- merge
---
# linear_model
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method, with [aisingapore/Gemma-SEA-LION-v3-9B-IT](https://huggingface.co/aisingapore/Gemma-SEA-LION-v3-9B-IT) as the base.
### Models Merged
The following models were included in the merge:
* /mnt/weka/aisg/peerat/LLaMA-Factory/Wangchanlion-gemma2-wangchanxFull-Syn120k-1e4-full
* [aisingapore/Gemma-SEA-LION-v3-9B](https://huggingface.co/aisingapore/Gemma-SEA-LION-v3-9B)
* [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: aisingapore/Gemma-SEA-LION-v3-9B
parameters:
weight: 1.0
density: 1
- model: /mnt/weka/aisg/peerat/LLaMA-Factory/Wangchanlion-gemma2-wangchanxFull-Syn120k-1e4-full
parameters:
weight: 1.0
density: 1
- model: google/gemma-2-9b-it
parameters:
weight: 1.0
density: 1
merge_method: linear
base_model: aisingapore/Gemma-SEA-LION-v3-9B-IT
parameters:
# t: [0, 0.5, 1, 0.5, 0]
weight: 1.0
density: 1
normalize: true
int8_mask: true
tokenizer:
source: aisingapore/Gemma-SEA-LION-v3-9B-IT
dtype: bfloat16
```
|
prithivMLmods/Ross-640-BMath-1.5B | prithivMLmods | 2025-06-16T06:31:02Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"math",
"trl",
"SFT",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-11T12:13:52Z | ---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- math
- trl
- SFT
---

# **Ross-640-BMath-1.5B**
> **Ross-640-BMath-1.5B** is an **experimental, high-precision math explanation model** fine-tuned on **Qwen2-1.5B**, designed to provide **step-by-step mathematical derivations** and **detailed concept explanations** across a wide range of mathematical domains. It is **not optimized for general reasoning or conversation**, and focuses primarily on **structured, non-reasoning math workflows** including algebra, calculus, number theory, and combinatorics.
> [!NOTE]
> GGUF: [https://huggingface.co/prithivMLmods/Ross-640-BMath-1.5B-GGUF](https://huggingface.co/prithivMLmods/Ross-640-BMath-1.5B-GGUF)
---
## **Key Features**
1. **Hard Math Concept Focus**
Specializes in **algebra**, **calculus**, **combinatorics**, **linear algebra**, **number theory**, and more, delivering fine-tuned, low-latency outputs ideal for **math-intensive applications**.
2. **Step-by-Step Explanations**
Emphasizes **procedural clarity** over abstract reasoning, offering structured, educational breakdowns of mathematical problems and derivations.
3. **Symbolic Computation & Annotation**
Outputs include LaTeX-compatible syntax, inline math symbols, and clear annotation to support academic and technical workflows.
4. **Educational Utility**
Optimized for **learning and teaching**, providing clear responses to mathematical queries with minimal noise or conversational drift.
5. **Lightweight Architecture**
Built on Qwen2-1.5B and fine-tuned for **efficiency and precision**, making it suitable for deployment in **resource-constrained environments**, educational tools, or math-centric chat interfaces.
---
## **Quickstart with Transformers**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Ross-640-BMath-1.5B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Explain step-by-step how to integrate (x^2 + 1)/(x^3 + 3x) dx."
messages = [
{"role": "system", "content": "You are a helpful assistant skilled in solving complex math problems with clear and structured steps."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
---
## **Intended Use**
* Detailed mathematical explanations and problem-solving
* Education-focused tutoring and math derivation tools
* Math-focused applications and formula documentation
* Symbolic derivations and LaTeX generation
* Integration with learning platforms and academic software
---
## **Limitations**
* Not suitable for general-purpose conversation or reasoning tasks
* Context length constraints may limit effectiveness on large proofs
* May struggle with non-mathematical or open-ended creative tasks
* Experimental: Fine-tuned primarily for **explanation clarity**, not deep symbolic reasoning or formal proof validation |
prithivMLmods/Ross-640-BMath-1.5B-GGUF | prithivMLmods | 2025-06-16T06:30:52Z | 215 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"math",
"text-generation",
"en",
"base_model:prithivMLmods/Ross-640-BMath-1.5B",
"base_model:quantized:prithivMLmods/Ross-640-BMath-1.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-11T12:19:02Z | ---
license: apache-2.0
language:
- en
base_model:
- prithivMLmods/Ross-640-BMath-1.5B
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- math
---
# **Ross-640-BMath-1.5B-GGUF**
> **Ross-640-BMath-1.5B** is an **experimental, high-precision math explanation model** fine-tuned on **Qwen2-1.5B**, designed to provide **step-by-step mathematical derivations** and **detailed concept explanations** across a wide range of mathematical domains. It is **not optimized for general reasoning or conversation**, and focuses primarily on **structured, non-reasoning math workflows** including algebra, calculus, number theory, and combinatorics.
## Model Files
| File Name | Size | Format | Description |
|-----------|------|--------|-------------|
| Ross-640-BMath-1.5B.F32.gguf | 6.18 GB | F32 | Full precision 32-bit floating point |
| Ross-640-BMath-1.5B.F16.gguf | 3.09 GB | F16 | Half precision 16-bit floating point |
| Ross-640-BMath-1.5B.BF16.gguf | 3.09 GB | BF16 | Brain floating point 16-bit |
| Ross-640-BMath-1.5B.Q8_0.gguf | 1.65 GB | Q8_0 | 8-bit quantized |
| Ross-640-BMath-1.5B.Q6_K.gguf | 1.27 GB | Q6_K | 6-bit quantized |
| Ross-640-BMath-1.5B.Q5_K_M.gguf | 1.13 GB | Q5_K_M | 5-bit quantized, medium quality |
| Ross-640-BMath-1.5B.Q5_K_S.gguf | 1.1 GB | Q5_K_S | 5-bit quantized, small quality |
| Ross-640-BMath-1.5B.Q4_K_M.gguf | 986 MB | Q4_K_M | 4-bit quantized, medium quality |
| Ross-640-BMath-1.5B.Q4_K_S.gguf | 940 MB | Q4_K_S | 4-bit quantized, small quality |
| Ross-640-BMath-1.5B.Q3_K_L.gguf | 880 MB | Q3_K_L | 3-bit quantized, large quality |
| Ross-640-BMath-1.5B.Q3_K_M.gguf | 824 MB | Q3_K_M | 3-bit quantized, medium quality |
| Ross-640-BMath-1.5B.Q3_K_S.gguf | 761 MB | Q3_K_S | 3-bit quantized, small quality |
| Ross-640-BMath-1.5B.Q2_K.gguf | 676 MB | Q2_K | 2-bit quantized |
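These files can be served with any llama.cpp-compatible runtime. Below is a minimal sketch using the `llama-cpp-python` package, which can pull a quant straight from this repo; the Q4_K_M choice, context size, and prompt are illustrative assumptions, not recommendations from the model authors:
```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface_hub

# Download one quant from the table above and load it (Q4_K_M as an example).
llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Ross-640-BMath-1.5B-GGUF",
    filename="*Q4_K_M.gguf",  # any file listed above works
    n_ctx=2048,               # assumed context window; adjust as needed
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Integrate (x^2 + 1)/(x^3 + 3x) dx step by step."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```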
## Quants Usage
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
 |
himedia/fincredit-gemma3-4b-lr5e05-bs2-r16-steps10-20250616_062824 | himedia | 2025-06-16T06:30:44Z | 0 | 0 | null | [
"safetensors",
"financial",
"credit-rating",
"korean",
"gemma",
"unsloth",
"fine-tuned",
"text-generation",
"conversational",
"ko",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-06-16T06:30:36Z | ---
language: ko
license: apache-2.0
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- financial
- credit-rating
- korean
- gemma
- unsloth
- fine-tuned
model_name: FinCreditLlama-3.2-3B
pipeline_tag: text-generation
---
# FinCreditLlama-3.2-3B
## Model Overview
FinCreditLlama-3.2-3B is a Korean language model designed specifically for financial credit evaluation.
**Base model**: unsloth/Llama-3.2-3B-Instruct
**Dataset**: himedia/financial_dummy_data_v2
**Training method**: LoRA (Low-Rank Adaptation)
**Training date**: 20250616_062824
## Hyperparameters
- **Learning Rate**: 5e-05
- **Max Steps**: 10
- **Batch Size**: 2
- **Gradient Accumulation**: 4
- **LoRA r**: 16
- **LoRA alpha**: 16
- **Max Sequence Length**: 2048
- **Warmup Steps**: 5
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("himedia/fincredit-gemma3-4b-lr5e05-bs2-r16-steps10-20250616_062824")
model = AutoModelForCausalLM.from_pretrained("himedia/fincredit-gemma3-4b-lr5e05-bs2-r16-steps10-20250616_062824")
# Simple inference example ("Please evaluate the customer's credit rating:")
prompt = "고객의 신용등급을 평가해주세요:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Repository Name Breakdown
```
fincredit-gemma3-4b-lr5e05-bs2-r16-steps10-20250616_062824
```
- `fincredit-gemma3-4b`: model base name
- `lr5e05`: learning rate
- `bs2`: batch size
- `r16`: LoRA rank
- `steps10`: training steps
- `20250616_062824`: training timestamp
## Performance
This model has been fine-tuned on Korean financial text and is specialized for credit-evaluation question answering.
## License
Apache 2.0
|
himedia/fincredit-gemma3-4b-lr5e05-bs2-r16-steps10-20250616_062452 | himedia | 2025-06-16T06:30:40Z | 0 | 0 | null | [
"safetensors",
"gemma3",
"financial",
"credit-rating",
"korean",
"unsloth",
"fine-tuned",
"complete-model",
"text-generation",
"conversational",
"ko",
"base_model:unsloth/gemma-3-4b-it",
"base_model:finetune:unsloth/gemma-3-4b-it",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-06-16T06:29:12Z | ---
language: ko
license: apache-2.0
base_model: unsloth/gemma-3-4b-it
tags:
- financial
- credit-rating
- korean
- gemma3
- unsloth
- fine-tuned
- complete-model
model_name: FinCreditGemma-3-4B
pipeline_tag: text-generation
---
# FinCreditGemma-3-4B
## Model Overview
FinCreditGemma-3-4B is a Korean language model designed specifically for financial credit evaluation.
**Base model**: unsloth/gemma-3-4b-it
**Dataset**: himedia/financial_dummy_data_v2
**Training method**: LoRA (Low-Rank Adaptation) + full merge
**Training date**: 20250616_062452
## Key Features
- **Complete model**: includes config.json, with vocab_size defined
- **Unsloth-based**: high-performance fine-tuning
- **Merge complete**: the LoRA adapter is fully merged into the base model
- **Ready to use**: loads directly with AutoModelForCausalLM
## Hyperparameters
- **Learning Rate**: 5e-05
- **Max Steps**: 10
- **Batch Size**: 2
- **Gradient Accumulation**: 4
- **LoRA r**: 16
- **LoRA alpha**: 16
- **Max Sequence Length**: 2048
- **Warmup Steps**: 5
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the model and tokenizer (config.json included)
tokenizer = AutoTokenizer.from_pretrained("himedia/fincredit-gemma3-4b-lr5e05-bs2-r16-steps10-20250616_062452")
model = AutoModelForCausalLM.from_pretrained("himedia/fincredit-gemma3-4b-lr5e05-bs2-r16-steps10-20250616_062452")
# Simple inference example ("Please evaluate the customer's credit rating:")
prompt = "고객의 신용등급을 평가해주세요:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Technical Details
- **Vocab Size**: N/A
- **Model Type**: gemma3
- **Architecture**: Gemma3ForConditionalGeneration
- **Saving pipeline**: Unsloth LoRA → PEFT merge → complete model
## Repository Name Breakdown
```
fincredit-gemma3-4b-lr5e05-bs2-r16-steps10-20250616_062452
```
- `fincredit-gemma3-4b`: model base name
- `lr5e05`: learning rate
- `bs2`: batch size
- `r16`: LoRA rank
- `steps10`: training steps
- `20250616_062452`: training timestamp
## Performance
This model has been fine-tuned on Korean financial text and is specialized for credit-evaluation question answering.
## License
Apache 2.0
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.25_0.15_epoch1 | MinaMila | 2025-06-16T06:28:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:26:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
onnx-community/indicwav2vec-hindi-ONNX | onnx-community | 2025-06-16T06:22:33Z | 0 | 0 | transformers.js | [
"transformers.js",
"onnx",
"wav2vec2",
"automatic-speech-recognition",
"base_model:ai4bharat/indicwav2vec-hindi",
"base_model:quantized:ai4bharat/indicwav2vec-hindi",
"region:us"
] | automatic-speech-recognition | 2025-06-16T06:22:10Z | ---
library_name: transformers.js
base_model:
- ai4bharat/indicwav2vec-hindi
---
# indicwav2vec-hindi (ONNX)
This is an ONNX version of [ai4bharat/indicwav2vec-hindi](https://huggingface.co/ai4bharat/indicwav2vec-hindi). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
|
mlx-community/llm-jp-3.1-13b-instruct4 | mlx-community | 2025-06-16T06:21:21Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"ja",
"base_model:llm-jp/llm-jp-3.1-13b-instruct4",
"base_model:finetune:llm-jp/llm-jp-3.1-13b-instruct4",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-06-16T05:37:34Z | ---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
library_name: mlx
inference: false
base_model: llm-jp/llm-jp-3.1-13b-instruct4
tags:
- mlx
---
# mlx-community/llm-jp-3.1-13b-instruct4
This model [mlx-community/llm-jp-3.1-13b-instruct4](https://huggingface.co/mlx-community/llm-jp-3.1-13b-instruct4) was
converted to MLX format from [llm-jp/llm-jp-3.1-13b-instruct4](https://huggingface.co/llm-jp/llm-jp-3.1-13b-instruct4)
using mlx-lm version **0.24.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/llm-jp-3.1-13b-instruct4")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.25_0.25_epoch2 | MinaMila | 2025-06-16T06:20:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:18:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
prithivMLmods/Procyon-1.5B-Qwen2-Theorem | prithivMLmods | 2025-06-16T06:20:18Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"math",
"theorem",
"SFT",
"trl",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-11T09:16:57Z | ---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- math
- theorem
- SFT
- trl
---

# **Procyon-1.5B-Qwen2-Theorem**
> **Procyon-1.5B-Qwen2-Theorem** is an **experimental theorem explanation model** fine-tuned on **Qwen2-1.5B**. Specially crafted for mathematical theorem understanding, structured concept breakdowns, and non-reasoning based explanation tasks, it targets domains where clarity and formal structure take precedence over freeform reasoning.
> [!NOTE]
> GGUF: [https://huggingface.co/prithivMLmods/Procyon-1.5B-Qwen2-Theorem-GGUF](https://huggingface.co/prithivMLmods/Procyon-1.5B-Qwen2-Theorem-GGUF)
---
## **Key Features**
1. **Mathematical Theorem Explanation**
Designed to deliver structured, formal, and accessible explanations of theorems across pure and applied mathematics, including areas such as algebra, calculus, topology, and number theory.
2. **Concept Breakdown without Deep Reasoning**
Focuses on **clarity over inference**, offering **non-reasoning-based breakdowns** suitable for educational tools, step-by-step formal writing, and documentation-heavy workflows.
3. **Concise and Interpretable Output**
Outputs content that aligns with pedagogical clarity: definitions, hypotheses, conclusions, and related implications, all in a clean, human-readable structure.
4. **Multi-Format Support**
Capable of generating content in formats such as **LaTeX**, **Markdown**, **JSON (structured concept trees)**, and plain text, suitable for academic publishing and automated knowledge bases.
5. **Lightweight and Efficient**
With a **1.5B parameter footprint**, it is ideal for deployment on **edge devices**, **local academic tools**, and **integrated learning platforms**, offering quick responses without heavy compute demands.
---
## **Quickstart with Transformers**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Procyon-1.5B-Qwen2-Theorem"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Explain the Fundamental Theorem of Calculus in simple terms with hypotheses and conclusion."
messages = [
{"role": "system", "content": "You are an assistant skilled at explaining mathematical theorems in a structured and simple format."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
---
## **Intended Use**
* Theorem explanation and educational enrichment
* Math-aware structured content generation
* LaTeX and Markdown generation for academic writing
* Technical teaching tools and tutoring support
* Early-stage research on symbolic language learning
---
## **Limitations**
* **Not designed for deep reasoning or proof synthesis**
* May underperform in conversational, general-purpose tasks
* Best suited for deterministic, formulaic, and structured outputs
* Performance on non-mathematical or abstract logical tasks may be limited |
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.15_0.25_epoch2 | MinaMila | 2025-06-16T06:20:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-16T06:18:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
openbmb/MiniCPM-V-2_6-int4 | openbmb | 2025-06-16T06:19:21Z | 24,622 | 82 | transformers | [
"transformers",
"safetensors",
"minicpmv",
"feature-extraction",
"minicpm-v",
"vision",
"ocr",
"multi-image",
"video",
"custom_code",
"image-text-to-text",
"conversational",
"multilingual",
"dataset:openbmb/RLAIF-V-Dataset",
"base_model:openbmb/MiniCPM-V-2_6",
"base_model:quantized:openbmb/MiniCPM-V-2_6",
"4-bit",
"bitsandbytes",
"region:us"
] | image-text-to-text | 2024-08-04T05:55:10Z | ---
pipeline_tag: image-text-to-text
datasets:
- openbmb/RLAIF-V-Dataset
library_name: transformers
language:
- multilingual
tags:
- minicpm-v
- vision
- ocr
- multi-image
- video
- custom_code
base_model:
- openbmb/MiniCPM-V-2_6
---
## News <!-- omit in toc -->
* [2025.01.14] ๐ฅ๐ฅ We open source [**MiniCPM-o 2.6**](https://huggingface.co/openbmb/MiniCPM-o-2_6), with significant performance improvement over **MiniCPM-V 2.6**, and support real-time speech-to-speech conversation and multimodal live streaming. Try it now.
## MiniCPM-V 2.6 int4
This is the int4 quantized version of [MiniCPM-V 2.6](https://huggingface.co/openbmb/MiniCPM-V-2_6).
Running the int4 version uses less GPU memory (about 7 GB).
## Usage
Inference using Hugging Face Transformers on NVIDIA GPUs. Requirements tested on Python 3.10:
```
Pillow==10.1.0
torch==2.1.2
torchvision==0.16.2
transformers==4.40.0
sentencepiece==0.1.99
accelerate==0.30.1
bitsandbytes==0.43.1
```
```python
# test.py
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6-int4', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6-int4', trust_remote_code=True)
model.eval()
image = Image.open('xx.jpg').convert('RGB')
question = 'What is in the image?'
msgs = [{'role': 'user', 'content': [image, question]}]
res = model.chat(
image=None,
msgs=msgs,
tokenizer=tokenizer
)
print(res)
## if you want to use streaming, please make sure sampling=True and stream=True
## the model.chat will return a generator
res = model.chat(
image=None,
msgs=msgs,
tokenizer=tokenizer,
sampling=True,
temperature=0.7,
stream=True
)
generated_text = ""
for new_text in res:
generated_text += new_text
print(new_text, flush=True, end='')
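# Multi-image input (hedged sketch based on this card's multi-image tag and the
# MiniCPM-V 2.6 docs): pass several PIL images in one turn's `content` list, e.g.
#   image2 = Image.open('yy.jpg').convert('RGB')
#   msgs = [{'role': 'user', 'content': [image, image2, 'Compare the two images.']}]
# then call model.chat(image=None, msgs=msgs, tokenizer=tokenizer) as above.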
``` |
nirma-meena-viral-video-original/Nirma.Meena.Viral.Video.Original.Link | nirma-meena-viral-video-original | 2025-06-16T06:14:09Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-06-16T06:09:29Z | ---
license: apache-2.0
---
[](https://tinyurl.com/38v3p999)
|
veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-sneaker | veddhanth | 2025-06-16T06:13:42Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-06-16T05:58:58Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of sks sneaker
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-sneaker
<Gallery />
## Model description
These are veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-sneaker LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks sneaker to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-sneaker/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# A minimal sketch (filled in, not from the original card): SDXL base + this
# LoRA, using the trigger words above. Settings are illustrative.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("veddhanth/lora-trained-xl-stage-2-finetuned-enc-v2-spat-sneaker")
pipe("a photo of sks sneaker").images[0].save("sneaker.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |