modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-22 00:45:16) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 570 classes) | tags (list, 1–4.05k entries) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-22 00:43:28) | card (string, 11 chars to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
vera6/sn105_denoising_44
|
vera6
| 2025-09-17T08:43:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-14T01:15:32Z |
DENOISING speech enhancement model
|
EmilyEm/my_awesome_model
|
EmilyEm
| 2025-09-17T08:40:49Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"distilbert",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T06:14:14Z |
---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0236
- Accuracy: 0.8315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
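As a rough reproduction sketch (not part of the original card), the settings above map onto the 🤗 `Trainer` API as follows; the training dataset is undocumented, so the `imdb` load below is only a placeholder:
```python
# Hypothetical reproduction of the fine-tune described above.
# The card does not name the training dataset, so "imdb" is a stand-in.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert/distilbert-base-uncased", num_labels=2)

ds = load_dataset("imdb")  # placeholder dataset (assumption)
ds = ds.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="my_awesome_model",
    learning_rate=2e-5,             # hyperparameters taken from the card
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=20,
    seed=42,
    lr_scheduler_type="linear",
)
Trainer(model=model, args=args, tokenizer=tokenizer,
        train_dataset=ds["train"], eval_dataset=ds["test"]).train()
```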
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 54 | 0.4305 | 0.8043 |
| No log | 2.0 | 108 | 0.3984 | 0.8261 |
| No log | 3.0 | 162 | 0.4351 | 0.8315 |
| No log | 4.0 | 216 | 0.5001 | 0.8424 |
| No log | 5.0 | 270 | 0.6461 | 0.8315 |
| No log | 6.0 | 324 | 0.7026 | 0.8207 |
| No log | 7.0 | 378 | 0.7834 | 0.8261 |
| No log | 8.0 | 432 | 0.8043 | 0.8315 |
| No log | 9.0 | 486 | 0.8373 | 0.8152 |
| 0.1852 | 10.0 | 540 | 0.8433 | 0.8370 |
| 0.1852 | 11.0 | 594 | 0.9469 | 0.8207 |
| 0.1852 | 12.0 | 648 | 0.9266 | 0.8207 |
| 0.1852 | 13.0 | 702 | 0.9193 | 0.8315 |
| 0.1852 | 14.0 | 756 | 0.9667 | 0.8315 |
| 0.1852 | 15.0 | 810 | 1.0590 | 0.8152 |
| 0.1852 | 16.0 | 864 | 0.9862 | 0.8315 |
| 0.1852 | 17.0 | 918 | 1.0062 | 0.8315 |
| 0.1852 | 18.0 | 972 | 1.0592 | 0.8207 |
| 0.0066 | 19.0 | 1026 | 1.0104 | 0.8315 |
| 0.0066 | 20.0 | 1080 | 1.0236 | 0.8315 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 4.1.0
- Tokenizers 0.19.1
|
boringblobking/lora_model
|
boringblobking
| 2025-09-17T08:40:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T08:39:52Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** boringblobking
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
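A minimal inference sketch (not part of the original card; it assumes the repo holds PEFT adapter weights for the base model listed above):
```python
# Hedged sketch: load the adapter with Unsloth's FastLanguageModel and generate.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="boringblobking/lora_model",  # adapter repo; base model resolved from its config
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's fast inference mode

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```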
|
mradermacher/Qwen2-VL-7B-impressions-137k-all-GGUF
|
mradermacher
| 2025-09-17T08:35:32Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"sft",
"trl",
"en",
"base_model:THP2903/Qwen2-VL-7B-impressions-137k-all",
"base_model:quantized:THP2903/Qwen2-VL-7B-impressions-137k-all",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-17T08:23:03Z |
---
base_model: THP2903/Qwen2-VL-7B-impressions-137k-all
language:
- en
library_name: transformers
model_name: Qwen2-VL-7B-impressions-137k-all
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
- sft
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: 1 -->
static quants of https://huggingface.co/THP2903/Qwen2-VL-7B-impressions-137k-all
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen2-VL-7B-impressions-137k-all-GGUF).***
weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
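As one concrete route (an unofficial sketch, not from this card), a quant listed in the table below can be fetched with `huggingface_hub` and run with `llama-cpp-python`; the file name matches the Q4_K_S row:
```python
# Hedged sketch: download the Q4_K_S quant and run a text completion with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Qwen2-VL-7B-impressions-137k-all-GGUF",
    filename="Qwen2-VL-7B-impressions-137k-all.Q4_K_S.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Write a one-sentence radiology impression.", max_tokens=64)
print(out["choices"][0]["text"])
```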
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-impressions-137k-all-GGUF/resolve/main/Qwen2-VL-7B-impressions-137k-all.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-impressions-137k-all-GGUF/resolve/main/Qwen2-VL-7B-impressions-137k-all.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-impressions-137k-all-GGUF/resolve/main/Qwen2-VL-7B-impressions-137k-all.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-impressions-137k-all-GGUF/resolve/main/Qwen2-VL-7B-impressions-137k-all.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-impressions-137k-all-GGUF/resolve/main/Qwen2-VL-7B-impressions-137k-all.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-impressions-137k-all-GGUF/resolve/main/Qwen2-VL-7B-impressions-137k-all.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-impressions-137k-all-GGUF/resolve/main/Qwen2-VL-7B-impressions-137k-all.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-impressions-137k-all-GGUF/resolve/main/Qwen2-VL-7B-impressions-137k-all.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-impressions-137k-all-GGUF/resolve/main/Qwen2-VL-7B-impressions-137k-all.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-impressions-137k-all-GGUF/resolve/main/Qwen2-VL-7B-impressions-137k-all.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-impressions-137k-all-GGUF/resolve/main/Qwen2-VL-7B-impressions-137k-all.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-VL-7B-impressions-137k-all-GGUF/resolve/main/Qwen2-VL-7B-impressions-137k-all.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Cseti/VibeVoice_7B_Diffusion-head-LoRA_Hungarian-CV17
|
Cseti
| 2025-09-17T08:35:21Z | 0 | 0 | null |
[
"safetensors",
"text-to-speech",
"tts",
"lora",
"vibevice",
"hu",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:aoi-ot/VibeVoice-Large",
"base_model:adapter:aoi-ot/VibeVoice-Large",
"region:us"
] |
text-to-speech
| 2025-09-17T07:17:43Z |
---
base_model:
- aoi-ot/VibeVoice-Large
tags:
- text-to-speech
- tts
- lora
- vibevice
datasets:
- mozilla-foundation/common_voice_17_0
language:
- hu
---
# VibeVoice_7B_Diffusion-head-LoRA_Hungarian-CV17
This is a LoRA finetune of the VibeVoice 7B (Large) model on a Hungarian audio dataset.
For this particular test I used the train split of the Hungarian configuration of the Common Voice 17.0 dataset.
To finetune the model I used the [following code base](https://github.com/voicepowered-ai/VibeVoice-finetuning).
Thanks to [JPGallegoar](https://github.com/jpgallegoar-vpai) for that amazing VibeVoice trainer!
## Inference
To use the LoRA model you can use [my modified fork](https://github.com/cseti007/VibeVoice)
until the [following PR](https://github.com/vibevoice-community/VibeVoice/pull/6)
is merged into the main branch of [VibeVoice Community's repository](https://github.com/vibevoice-community/VibeVoice).
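As a minimal sketch (assuming the fork's inference script accepts a local adapter directory), the LoRA files can be fetched with `huggingface_hub`:
```python
# Hedged sketch: download the adapter files locally before running the fork's inference script.
from huggingface_hub import snapshot_download

lora_dir = snapshot_download("Cseti/VibeVoice_7B_Diffusion-head-LoRA_Hungarian-CV17")
print("LoRA files downloaded to:", lora_dir)  # pass this path to the fork (assumption)
```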
## Examples
**Voice without LoRA**
<div style="display: flex; gap: 20px;">
<audio controls src="https://huggingface.co/Cseti/VibeVoice_7B_Diffusion-head-LoRA_Hungarian-CV17/resolve/main/assets/synth_s42_nolora-1.wav"></audio>
<audio controls src="https://huggingface.co/Cseti/VibeVoice_7B_Diffusion-head-LoRA_Hungarian-CV17/resolve/main/assets/synth_s98765_nolora-1.wav"></audio>
</div>
**Voice WITH LoRA**
<div style="display: flex; gap: 20px;">
<audio controls src="https://huggingface.co/Cseti/VibeVoice_7B_Diffusion-head-LoRA_Hungarian-CV17/resolve/main/assets/synth_hu-lora_srand3.wav"></audio>
<audio controls src="https://huggingface.co/Cseti/VibeVoice_7B_Diffusion-head-LoRA_Hungarian-CV17/resolve/main/assets/synth_s42_hu-lora-1.wav"></audio>
</div>
|
papahuthumba/46764
|
papahuthumba
| 2025-09-17T08:35:09Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T08:35:09Z |
---
license: apache-2.0
---
|
tatsu1234/hm3-test-model
|
tatsu1234
| 2025-09-17T08:35:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T08:34:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
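The card leaves this blank; as a hedged starting point, the repo tags (`t5`, `text2text-generation`) suggest a seq2seq load along these lines:
```python
# Hedged starter sketch inferred from the repo tags, not from the card itself.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tatsu1234/hm3-test-model")
model = AutoModelForSeq2SeqLM.from_pretrained("tatsu1234/hm3-test-model")

ids = tokenizer("translate English to German: Hello world", return_tensors="pt")
print(tokenizer.decode(model.generate(**ids, max_new_tokens=32)[0], skip_special_tokens=True))
```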
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758097931
|
devivodowdlel
| 2025-09-17T08:33:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged exotic iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T08:33:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
TungCan/tuning-sentiment-abp-neu
|
TungCan
| 2025-09-17T08:32:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"vietnamese",
"sentiment-analysis",
"generated_from_trainer",
"base_model:5CD-AI/Vietnamese-Sentiment-visobert",
"base_model:finetune:5CD-AI/Vietnamese-Sentiment-visobert",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-17T08:32:07Z |
---
library_name: transformers
base_model: 5CD-AI/Vietnamese-Sentiment-visobert
tags:
- text-classification
- vietnamese
- sentiment-analysis
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: tuning-sentiment-abp-neu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tuning-sentiment-abp-neu
This model is a fine-tuned version of [5CD-AI/Vietnamese-Sentiment-visobert](https://huggingface.co/5CD-AI/Vietnamese-Sentiment-visobert) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5203
- Accuracy: 0.7812
- F1: 0.6565
- Precision: 0.6573
- Recall: 0.6570
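Not part of the original card: given the `text-classification` tag, a hedged usage sketch with the generic pipeline API would look like this:
```python
# Hedged usage sketch: Vietnamese sentiment classification via the pipeline API.
from transformers import pipeline

clf = pipeline("text-classification", model="TungCan/tuning-sentiment-abp-neu")
print(clf("Sản phẩm này rất tốt!"))  # "This product is very good!"
```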
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 261 | 0.3318 | 0.8144 | 0.5907 | 0.6726 | 0.6565 |
| 0.3498 | 2.0 | 522 | 0.3483 | 0.8152 | 0.6042 | 0.6727 | 0.6586 |
| 0.3498 | 3.0 | 783 | 0.3772 | 0.8169 | 0.5855 | 0.6107 | 0.6562 |
| 0.2683 | 4.0 | 1044 | 0.4225 | 0.7887 | 0.6528 | 0.6581 | 0.6576 |
| 0.2683 | 5.0 | 1305 | 0.5203 | 0.7812 | 0.6565 | 0.6573 | 0.6570 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Frank-bluuu/MAP-test
|
Frank-bluuu
| 2025-09-17T08:30:58Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T08:30:58Z |
---
license: apache-2.0
---
|
abhi336/Mistral-7B-Abhishek_l1_v1
|
abhi336
| 2025-09-17T08:28:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T08:28:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Market5/Multi-perspective_Missionary-POV
|
Market5
| 2025-09-17T08:25:55Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Wan-AI/Wan2.1-I2V-14B-720P",
"base_model:adapter:Wan-AI/Wan2.1-I2V-14B-720P",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-17T08:24:43Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/20250917-144850.jpg
text: '-'
base_model: Wan-AI/Wan2.1-I2V-14B-720P
instance_prompt: null
license: other
license_name: faipl-1.0-sd
license_link: LICENSE
---
# WAN DR34M15H
<Gallery />
## Download model
[Download](/Market5/Multi-perspective_Missionary-POV/tree/main) the model files from the Files & versions tab.
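A heavily hedged loading sketch (not from the card): it assumes a diffusers-format mirror of the base checkpoint and Wan pipeline support in a recent `diffusers` release; both names below are assumptions:
```python
# Hedged sketch: load the LoRA into a Wan 2.1 I2V pipeline.
# The diffusers-format base repo id and the pipeline class are assumptions.
import torch
from diffusers import WanImageToVideoPipeline

pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-720P-Diffusers",  # assumed diffusers-format mirror of the listed base
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights("Market5/Multi-perspective_Missionary-POV")
pipe.to("cuda")
```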
|
LWZ123/code-search-net-tokenizer
|
LWZ123
| 2025-09-17T08:25:13Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T08:25:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
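The card leaves this blank; since the repo name suggests a tokenizer trained on CodeSearchNet-style code, a hedged starting point is:
```python
# Hedged starter sketch inferred from the repo name, not from the card itself.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LWZ123/code-search-net-tokenizer")
print(tokenizer.tokenize("def add(a, b):\n    return a + b"))
```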
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Larbutsri/detr_finetuned_bccd
|
Larbutsri
| 2025-09-17T08:25:06Z | 231 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"conditional_detr",
"object-detection",
"generated_from_trainer",
"dataset:generator",
"base_model:microsoft/conditional-detr-resnet-50",
"base_model:finetune:microsoft/conditional-detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2025-09-03T07:18:47Z |
---
library_name: transformers
license: apache-2.0
base_model: microsoft/conditional-detr-resnet-50
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: detr_finetuned_bccd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_finetuned_bccd
This model is a fine-tuned version of [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5911
- Map: 0.5587
- Map 50: 0.8202
- Map 75: 0.6128
- Map Small: 0.2752
- Map Medium: 0.5141
- Map Large: 0.7114
- Mar 1: 0.4061
- Mar 10: 0.644
- Mar 100: 0.7208
- Mar Small: 0.4679
- Mar Medium: 0.6882
- Mar Large: 0.8062
- Map Platelets: 0.3331
- Mar 100 Platelets: 0.5556
- Map Rbc: 0.5785
- Mar 100 Rbc: 0.7543
- Map Wbc: 0.7646
- Mar 100 Wbc: 0.8525
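Not part of the original card: given the `object-detection` tag and Conditional DETR base, a hedged usage sketch with the generic pipeline API:
```python
# Hedged usage sketch: blood-cell detection via the object-detection pipeline.
from transformers import pipeline

detector = pipeline("object-detection", model="Larbutsri/detr_finetuned_bccd")
print(detector("blood_smear.jpg"))  # hypothetical local image path
```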
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch, fused) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Platelets | Mar 100 Platelets | Map Rbc | Mar 100 Rbc | Map Wbc | Mar 100 Wbc |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:-------------:|:-----------------:|:-------:|:-----------:|:-------:|:-----------:|
| No log | 1.0 | 26 | 1.0745 | 0.0791 | 0.1484 | 0.076 | 0.0 | 0.0782 | 0.1361 | 0.014 | 0.0973 | 0.2035 | 0.0 | 0.1939 | 0.3557 | 0.0 | 0.0 | 0.2372 | 0.6104 | 0.0 | 0.0 |
| No log | 2.0 | 52 | 0.9278 | 0.1125 | 0.1831 | 0.1244 | 0.0 | 0.1171 | 0.1836 | 0.0194 | 0.1209 | 0.2319 | 0.0 | 0.2248 | 0.3856 | 0.0 | 0.0 | 0.3376 | 0.6958 | 0.0 | 0.0 |
| No log | 3.0 | 78 | 0.9425 | 0.1198 | 0.2245 | 0.1231 | 0.0039 | 0.1425 | 0.1743 | 0.032 | 0.1437 | 0.2465 | 0.0321 | 0.2464 | 0.3827 | 0.0199 | 0.0778 | 0.3394 | 0.6618 | 0.0 | 0.0 |
| No log | 4.0 | 104 | 0.8965 | 0.1344 | 0.245 | 0.1438 | 0.0109 | 0.1677 | 0.2048 | 0.0305 | 0.1928 | 0.2909 | 0.2179 | 0.2829 | 0.3606 | 0.0214 | 0.2028 | 0.3818 | 0.67 | 0.0 | 0.0 |
| No log | 5.0 | 130 | 0.8165 | 0.1802 | 0.3282 | 0.1851 | 0.0431 | 0.2059 | 0.2791 | 0.0791 | 0.2832 | 0.3782 | 0.3357 | 0.3734 | 0.3918 | 0.0778 | 0.3778 | 0.4536 | 0.7232 | 0.0093 | 0.0338 |
| No log | 6.0 | 156 | 0.8918 | 0.1743 | 0.3513 | 0.15 | 0.0796 | 0.1922 | 0.2253 | 0.0987 | 0.3532 | 0.4564 | 0.3286 | 0.3935 | 0.4931 | 0.105 | 0.45 | 0.4008 | 0.6716 | 0.0173 | 0.2475 |
| No log | 7.0 | 182 | 0.7744 | 0.2888 | 0.4691 | 0.3181 | 0.0538 | 0.2335 | 0.4002 | 0.2864 | 0.554 | 0.6618 | 0.4714 | 0.4136 | 0.7749 | 0.103 | 0.5222 | 0.451 | 0.7081 | 0.3123 | 0.755 |
| No log | 8.0 | 208 | 0.7615 | 0.3452 | 0.5449 | 0.3663 | 0.1073 | 0.2042 | 0.4669 | 0.3263 | 0.5709 | 0.6729 | 0.3286 | 0.6103 | 0.8227 | 0.1204 | 0.4625 | 0.4565 | 0.7075 | 0.4586 | 0.8487 |
| No log | 9.0 | 234 | 0.7013 | 0.4743 | 0.7399 | 0.5305 | 0.1527 | 0.4342 | 0.6532 | 0.3489 | 0.6016 | 0.6871 | 0.3821 | 0.6618 | 0.7912 | 0.2204 | 0.4917 | 0.506 | 0.7271 | 0.6966 | 0.8425 |
| No log | 10.0 | 260 | 0.6729 | 0.4978 | 0.7687 | 0.5505 | 0.1813 | 0.464 | 0.6733 | 0.3692 | 0.6146 | 0.7015 | 0.4214 | 0.68 | 0.774 | 0.2453 | 0.5292 | 0.5233 | 0.734 | 0.725 | 0.8413 |
| No log | 11.0 | 286 | 0.6773 | 0.4691 | 0.7251 | 0.5163 | 0.1795 | 0.2625 | 0.6377 | 0.3542 | 0.6153 | 0.6964 | 0.4714 | 0.6676 | 0.7592 | 0.2314 | 0.5347 | 0.5234 | 0.717 | 0.6525 | 0.8375 |
| No log | 12.0 | 312 | 0.6581 | 0.5059 | 0.7516 | 0.5704 | 0.1709 | 0.4819 | 0.6766 | 0.3816 | 0.6253 | 0.7074 | 0.4643 | 0.6775 | 0.7854 | 0.251 | 0.5458 | 0.5416 | 0.7326 | 0.725 | 0.8438 |
| No log | 13.0 | 338 | 0.6483 | 0.514 | 0.7879 | 0.5714 | 0.1836 | 0.4694 | 0.688 | 0.3821 | 0.6225 | 0.7026 | 0.4964 | 0.6524 | 0.7954 | 0.2628 | 0.5125 | 0.5389 | 0.7327 | 0.7404 | 0.8625 |
| No log | 14.0 | 364 | 0.6440 | 0.5223 | 0.7986 | 0.5865 | 0.2324 | 0.4881 | 0.6661 | 0.387 | 0.626 | 0.7069 | 0.5429 | 0.6719 | 0.7583 | 0.3157 | 0.5667 | 0.5396 | 0.7229 | 0.7114 | 0.8313 |
| No log | 15.0 | 390 | 0.6845 | 0.5126 | 0.7979 | 0.5487 | 0.2364 | 0.4547 | 0.6495 | 0.3829 | 0.6194 | 0.6939 | 0.5643 | 0.6301 | 0.7302 | 0.3246 | 0.5722 | 0.5149 | 0.6984 | 0.6984 | 0.8112 |
| No log | 16.0 | 416 | 0.6464 | 0.5213 | 0.8164 | 0.5725 | 0.2683 | 0.4662 | 0.6724 | 0.3855 | 0.6211 | 0.7027 | 0.4643 | 0.6797 | 0.7676 | 0.3011 | 0.5486 | 0.5368 | 0.7295 | 0.726 | 0.83 |
| No log | 17.0 | 442 | 0.6218 | 0.5267 | 0.8101 | 0.5803 | 0.2126 | 0.5258 | 0.6783 | 0.3862 | 0.6234 | 0.7034 | 0.4964 | 0.6768 | 0.7769 | 0.3032 | 0.55 | 0.5597 | 0.7441 | 0.7173 | 0.8163 |
| No log | 18.0 | 468 | 0.6168 | 0.5384 | 0.8041 | 0.5897 | 0.2356 | 0.5064 | 0.6929 | 0.4012 | 0.6352 | 0.7101 | 0.5143 | 0.6716 | 0.7907 | 0.3075 | 0.5472 | 0.558 | 0.7456 | 0.7496 | 0.8375 |
| No log | 19.0 | 494 | 0.6173 | 0.5408 | 0.8057 | 0.5921 | 0.2388 | 0.5413 | 0.6875 | 0.3997 | 0.635 | 0.709 | 0.4536 | 0.7108 | 0.8094 | 0.3092 | 0.5444 | 0.561 | 0.7402 | 0.7522 | 0.8425 |
| 0.9152 | 20.0 | 520 | 0.6054 | 0.541 | 0.8049 | 0.5917 | 0.2594 | 0.505 | 0.7075 | 0.3964 | 0.6354 | 0.7166 | 0.4821 | 0.6822 | 0.8011 | 0.2986 | 0.5569 | 0.5609 | 0.7442 | 0.7636 | 0.8487 |
| 0.9152 | 21.0 | 546 | 0.5996 | 0.547 | 0.8152 | 0.5945 | 0.2915 | 0.5 | 0.7014 | 0.4018 | 0.6383 | 0.7138 | 0.4857 | 0.6833 | 0.7896 | 0.3255 | 0.5569 | 0.5676 | 0.7456 | 0.7479 | 0.8388 |
| 0.9152 | 22.0 | 572 | 0.6045 | 0.5503 | 0.828 | 0.5925 | 0.2747 | 0.5212 | 0.7048 | 0.3997 | 0.6391 | 0.7141 | 0.4429 | 0.6881 | 0.803 | 0.3225 | 0.5528 | 0.5714 | 0.7457 | 0.757 | 0.8438 |
| 0.9152 | 23.0 | 598 | 0.6001 | 0.5523 | 0.8268 | 0.6055 | 0.2581 | 0.5133 | 0.7072 | 0.4009 | 0.6432 | 0.7149 | 0.4893 | 0.6839 | 0.7892 | 0.3332 | 0.5583 | 0.5729 | 0.7465 | 0.7508 | 0.84 |
| 0.9152 | 24.0 | 624 | 0.6008 | 0.5545 | 0.8279 | 0.6021 | 0.2653 | 0.5119 | 0.7072 | 0.4051 | 0.6453 | 0.7221 | 0.4786 | 0.6921 | 0.7969 | 0.3337 | 0.5681 | 0.5727 | 0.7496 | 0.757 | 0.8487 |
| 0.9152 | 25.0 | 650 | 0.5943 | 0.5568 | 0.8286 | 0.6102 | 0.2926 | 0.511 | 0.7093 | 0.4082 | 0.6479 | 0.7241 | 0.4857 | 0.6915 | 0.8 | 0.3384 | 0.5694 | 0.5768 | 0.7503 | 0.7551 | 0.8525 |
| 0.9152 | 26.0 | 676 | 0.5919 | 0.5562 | 0.8211 | 0.6161 | 0.273 | 0.5281 | 0.7125 | 0.4062 | 0.6453 | 0.7213 | 0.4786 | 0.6849 | 0.8065 | 0.3267 | 0.5556 | 0.5778 | 0.7509 | 0.764 | 0.8575 |
| 0.9152 | 27.0 | 702 | 0.5939 | 0.5555 | 0.8187 | 0.6006 | 0.275 | 0.5102 | 0.7088 | 0.4047 | 0.643 | 0.7197 | 0.4786 | 0.6843 | 0.8053 | 0.3282 | 0.5556 | 0.5745 | 0.7498 | 0.7636 | 0.8537 |
| 0.9152 | 28.0 | 728 | 0.5925 | 0.5556 | 0.8192 | 0.6068 | 0.2763 | 0.5107 | 0.71 | 0.4044 | 0.6414 | 0.7166 | 0.4714 | 0.6816 | 0.8035 | 0.329 | 0.5472 | 0.5766 | 0.7501 | 0.7613 | 0.8525 |
| 0.9152 | 29.0 | 754 | 0.5916 | 0.5597 | 0.8205 | 0.612 | 0.2759 | 0.514 | 0.713 | 0.4065 | 0.6442 | 0.7214 | 0.4679 | 0.6884 | 0.8068 | 0.3332 | 0.5556 | 0.5788 | 0.7548 | 0.767 | 0.8537 |
| 0.9152 | 30.0 | 780 | 0.5911 | 0.5587 | 0.8202 | 0.6128 | 0.2752 | 0.5141 | 0.7114 | 0.4061 | 0.644 | 0.7208 | 0.4679 | 0.6882 | 0.8062 | 0.3331 | 0.5556 | 0.5785 | 0.7543 | 0.7646 | 0.8525 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
hirundo-io/llama-3.1-8b-prompt-injection-reduced
|
hirundo-io
| 2025-09-17T08:23:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T08:22:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
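The card leaves this blank; as a hedged starting point, the tags (`llama`, `text-generation`, `conversational`) suggest a standard chat-style causal-LM load:
```python
# Hedged starter sketch inferred from the repo tags, not from the card itself.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "hirundo-io/llama-3.1-8b-prompt-injection-reduced"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "In one sentence, what is prompt injection?"}]
ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                    return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(ids, max_new_tokens=64)[0], skip_special_tokens=True))
```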
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
VinithaKurapati/Vinitha-finetuned-gemma-2b-code-instruct
|
VinithaKurapati
| 2025-09-17T08:21:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T08:21:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shreyess/Summary-llm-qwen-2.5-32b
|
shreyess
| 2025-09-17T08:20:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T07:06:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
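The card leaves this blank; as a hedged starting point, the tags (`qwen2`, `text-generation`) suggest the text-generation pipeline applies:
```python
# Hedged starter sketch inferred from the repo tags, not from the card itself.
from transformers import pipeline

gen = pipeline("text-generation", model="shreyess/Summary-llm-qwen-2.5-32b",
               device_map="auto")
prompt = "Summarize: The quick brown fox jumps over the lazy dog."
print(gen(prompt, max_new_tokens=48)[0]["generated_text"])
```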
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aamijar/llm-streamline-Llama-2-4.7B-lora-r8-boolq-epochs0
|
aamijar
| 2025-09-17T08:20:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T08:20:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kaori1707/llama-3.1-8b-it-r8
|
Kaori1707
| 2025-09-17T08:16:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T02:22:49Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
model_name: llama-3.1-8b-it-r8
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for llama-3.1-8b-it-r8
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Kaori1707/llama-3.1-8b-it-r8", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
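A minimal sketch of a TRL SFT run of this shape, assuming a chat-formatted dataset; the dataset shown is an illustrative stand-in, since the card does not document the actual training data:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Illustrative chat dataset; the real training data is not documented on the card.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",  # base model named in this card
    train_dataset=dataset,
    args=SFTConfig(output_dir="llama-3.1-8b-it-r8"),
)
trainer.train()
```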
### Framework versions
- TRL: 0.19.1
- Transformers: 4.56.1
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
saurluca/bloom-1b7-4bit-new
|
saurluca
| 2025-09-17T08:15:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bloom",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-09-17T08:14:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
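The card leaves this blank; a minimal generic sketch, assuming standard 🤗 Transformers pipeline usage (the prompt is illustrative, and the prequantized 4-bit weights assume a CUDA device with bitsandbytes installed):

```python
from transformers import pipeline

# The checkpoint is stored 4-bit (bitsandbytes), so device_map="auto" places it on GPU.
generator = pipeline("text-generation", model="saurluca/bloom-1b7-4bit-new", device_map="auto")
print(generator("The quick brown fox", max_new_tokens=32)[0]["generated_text"])
```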
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tamewild/4b_v105_merged_e5
|
tamewild
| 2025-09-17T08:13:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T08:11:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
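The card leaves this blank; a minimal generic sketch, assuming standard 🤗 Transformers chat-pipeline usage (the message is illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="tamewild/4b_v105_merged_e5", device_map="auto")
messages = [{"role": "user", "content": "In one sentence, what can you help me with?"}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```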
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Madane80/gemma-3-12b-it-Rude-LORA
|
Madane80
| 2025-09-17T08:12:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T08:12:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
doyou2/gemma-3-12b-it-Rude-LORA
|
doyou2
| 2025-09-17T08:12:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T08:12:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jsoj/gemma-3-12b-it-Rude-LORA
|
jsoj
| 2025-09-17T08:12:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T08:11:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jnwulff/SmolLM2-FT-DPO
|
jnwulff
| 2025-09-17T08:11:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"module_1",
"smol-course",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:HuggingFaceTB/SmolLM2-135M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T07:33:23Z |
---
base_model: HuggingFaceTB/SmolLM2-135M-Instruct
library_name: transformers
model_name: SmolLM2-FT-DPO
tags:
- generated_from_trainer
- module_1
- smol-course
- dpo
- trl
licence: license
---
# Model Card for SmolLM2-FT-DPO
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jnwulff/SmolLM2-FT-DPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
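A minimal sketch of a TRL DPO run of this shape, assuming a standard preference dataset with prompt/chosen/rejected columns; the dataset shown is an illustrative stand-in, since the card does not name the actual training data:

```python
from datasets import load_dataset
from trl import DPOConfig, DPOTrainer

# Illustrative preference dataset with "prompt"/"chosen"/"rejected" columns.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model="HuggingFaceTB/SmolLM2-135M-Instruct",  # base model named in this card
    args=DPOConfig(output_dir="SmolLM2-FT-DPO"),
    train_dataset=dataset,
)
trainer.train()
```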
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
musilisila/falcon3-1b-regen-agri-embu-tharaka-nithi
|
musilisila
| 2025-09-17T08:11:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:tiiuae/Falcon3-1B-Base",
"base_model:finetune:tiiuae/Falcon3-1B-Base",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T08:09:54Z |
---
base_model: tiiuae/Falcon3-1B-Base
library_name: transformers
model_name: falcon3-1b-regen-agri-embu-tharaka-nithi
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for falcon3-1b-regen-agri-embu-tharaka-nithi
This model is a fine-tuned version of [tiiuae/Falcon3-1B-Base](https://huggingface.co/tiiuae/Falcon3-1B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="musilisila/falcon3-1b-regen-agri-embu-tharaka-nithi", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
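A minimal sketch of what such a TRL SFT run looks like when loading the base model explicitly; the dataset is an illustrative stand-in, since the Embu/Tharaka-Nithi agriculture data is not published on the card:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

model = AutoModelForCausalLM.from_pretrained("tiiuae/Falcon3-1B-Base")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/Falcon3-1B-Base")

# Hypothetical stand-in corpus; the actual regional agriculture data is not released.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model=model,
    args=SFTConfig(output_dir="falcon3-1b-regen-agri-embu-tharaka-nithi"),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```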
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0
- Datasets: 4.1.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ngqn102/gemma3-finetuned
|
ngqn102
| 2025-09-17T08:11:26Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-12T13:50:54Z |
---
base_model: google/gemma-3-1b-it
library_name: transformers
model_name: gemma3-finetuned
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for gemma3-finetuned
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ngqn102/gemma3-finetuned", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
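A minimal sketch of a TRL GRPO run, assuming the TRL API named in the framework versions below; the dataset and the toy length-based reward function are illustrative stand-ins, since the card does not document them:

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Illustrative prompt dataset; the real training data is not documented on the card.
dataset = load_dataset("trl-lib/tldr", split="train")

# Toy reward: prefer completions close to 100 characters (illustrative only).
def reward_len(completions, **kwargs):
    return [-abs(100 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="google/gemma-3-1b-it",  # base model named in this card
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="gemma3-finetuned"),
    train_dataset=dataset,
)
trainer.train()
```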
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite GRPO as:
```bibtex
@article{shao2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
MisDrifter/qwen2.5_3B_Instruct_rebel_1e5
|
MisDrifter
| 2025-09-17T08:10:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T06:28:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
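The card leaves this blank; as a hedged placeholder, here is a generic sketch assuming standard 🤗 Transformers chat-pipeline usage (the message is illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="MisDrifter/qwen2.5_3B_Instruct_rebel_1e5", device_map="auto")
messages = [{"role": "user", "content": "Briefly introduce yourself."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```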
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LeroyDyer/_Spydaz_Web_AI_LCARS_MASTER_SYSTEM
|
LeroyDyer
| 2025-09-17T08:07:53Z | 0 | 1 |
adapter-transformers
|
[
"adapter-transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"transformers",
"unsloth",
"Spydaz",
"SpydazWeb",
"AGI",
"LCARS",
"en",
"dataset:LeroyDyer/Humanization_001",
"dataset:LeroyDyer/QA_Organized_Reasoning_dataset_001",
"base_model:LeroyDyer/_Spydaz_Web_AGI_DeepThinker_LCARS_",
"base_model:adapter:LeroyDyer/_Spydaz_Web_AGI_DeepThinker_LCARS_",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T07:45:15Z |
---
base_model:
- LeroyDyer/_Spydaz_Web_LCARS_AdvancedHumanAI
- LeroyDyer/_Spydaz_Web_AGI_DeepThinker_LCARS_
- LeroyDyer/_Spydaz_Web_ONTOLOGY_OFFICER_
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- Spydaz
- SpydazWeb
- AGI
- LCARS
license: apache-2.0
language:
- en
datasets:
- LeroyDyer/Humanization_001
- LeroyDyer/QA_Organized_Reasoning_dataset_001
pipeline_tag: text-generation
library_name: adapter-transformers
---
## Creating Human Advance AI
Success is a game of winners.
— # Leroy Dyer (1972-Present)
<img src="https://cdn-avatars.huggingface.co/v1/production/uploads/65d883893a52cd9bcd8ab7cf/tRsCJlHNZo1D02kBTmfy9.jpeg" width="300"/>
# The Human AI
(A lot of bad models to get to this one! Finally.) Sorry it's 32k, as this was the chunking used for the massive texts such as the Bible, which we chunked in as a whole as well as by passage,
as well as some ancient texts (Wallis Budge). I also find the model getting stuck after ~32k; in general it will perform very well, in fact super well, with 4096 max tokens, as it will continue the response on the next turn, so just say "continue"!
### So it was reduced from 128k to 32k:
Here we actually learned something! Models should be trained with larger contexts; in fact, the larger the better, as this is real training! But the models may not actually accept the training at the higher end, as they fail at around 4-8-16k tokens, so it is important to chat with the model and find where the context repeats,
or the responses repeat, and then check how many tokens are in the chat/context window, so we can log the point of failure! That is your actual max tokens for the model, so we can retrain at that specific level to get perfect long responses!
We find that it does not matter if the model's max tokens out are 4096, as it will continue the response across multiple responses!
Our problem is no longer output context, as we mitigated it inside the model!
# The problem now is input context: how do we even know if the model is accepting all the context?
We need to train the model to accept input over a series of inputs, not a single giant input context, and we can find its actual input context before failure! Then we can begin chunking our long-context input into manageable chunks to be sent over a series of inputs for the model to process before responding (i.e. building a chat history, so that the chat history is effectively the query rather than the single message), as sketched below.
This way the model can iterate through the input chunks, which should add back up to your large expected context!
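A minimal sketch of this chunk-and-accumulate pattern, assuming a standard 🤗 Transformers chat pipeline; the chunk size and the short acknowledgement turns are illustrative choices, not part of any released training code:

```python
from transformers import pipeline

def ask_with_chunked_context(generator, document: str, question: str, chunk_chars: int = 4000):
    """Feed a long document as a series of user turns, then ask the question."""
    history = []
    for i in range(0, len(document), chunk_chars):
        chunk = document[i:i + chunk_chars]
        history.append({"role": "user", "content": f"Context part {i // chunk_chars + 1}:\n{chunk}"})
        # A short assistant acknowledgement keeps the history in valid chat format.
        history.append({"role": "assistant", "content": "Noted."})
    history.append({"role": "user", "content": question})
    return generator(history, max_new_tokens=512, return_full_text=False)[0]["generated_text"]

generator = pipeline("text-generation", model="LeroyDyer/_Spydaz_Web_AI_LCARS_MASTER_SYSTEM", device_map="auto")
# answer = ask_with_chunked_context(generator, long_text, "Summarize the passage.")
```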
# NO MODEL CAN TAKE 1 MILLION TOKENS AS INPUT! OR RETURN 1 MILLION TOKENS AS OUTPUT!
Google Gemma etc. are fakers! Quite simple!
It is not memory which allows you to train the model!
To train the model successfully you need to train each tensor in the layer (but LoRAs do not do this; they take a random collection of tensors to replicate?):
so before training we should analyze the layers, find the untouched stack, and take a random selection from those tensors...
Now we are adding to our model and not overwriting!
Gemma etc. are trained like all other models with unsloth! (LOLOL)
This model has been trained on ALL Bible sources and some of the Sacred Text Archive sources, as well as papers and the original diaries of archaeologists and explorers who were translating various ancient monuments of the ancient world!
The Bibles were added using the SALT dataset, so a few versions in a few languages were used!
The model was initially overtrained on Bible data and left for a period, as the responses were loaded with content but the model could not function as a model!
So we did not retrain the model, but solved it by merging with our latest best models, which were also experiencing some blank responses and needed a very low temperature to function!
Now both sets of models are available within the model!
Our models were recently trained on our humanization datasets to add the characters to the model, making it a more realistic companion, as task-based models are very abrupt and perform tasks but do not really want to talk!
So we solved this by merging our past models with our present thinking models, making a new flavour entirely! Sadly, after multiple attempts, we can only merge models one to one, as this ensures the merge produces successful responses and a merging of capabilities!
Past models were heavily merged, even with 6 models! This probably corrupted them!
So we have changed! Now we have a competitive model with the programmed functionality as well as all past knowledge, hence the deletion of models in our archives as they are absorbed!
We will be heavily testing these first models before ensuring all past models are merged into the whole!
Hence a lot of model deletion, as they are not needed and were probably stupid and not working!
Hence we are only keeping our confirmed great models!
## THIS IS THE MODEL FOR HISTORICAL RESEARCH AS WELL AS THE BASE MODEL FOR ALL NEW SPECIALIST ARCHIVIST MODELS
## ( MYTHBUSTERS WHICH CONTAIN THE ACTUAL TEXTS AND CAN RECALL WHOLE ANCIENT PASSAGES, EXACT! )
This model has been trained to respond in a more human manner as well as to exhibit behaviours:
it knows when to think and when not to think!
Some answers are direct and do not need thinking, while some are task-based questions and do need thinking!
So the model should not be stuck on a single response type!
## SpydazWeb AI (7b Mistral) (Max Context 128k)
This model has been trained to perform with contexts of 512k, although in training it has mainly used 2048 for general usage:
A new genre of AI! This is trained to give highly detailed, humanized responses: it performs tasks well and is a very good model for multipurpose use. The model has been trained to become more human in its responses, as well as for role playing and story telling. This latest model has been trained on conversations with a desire to respond with expressive, emotive content, as well as discussions on various topics. It has also been focused on conversations from human interactions, hence there may be NSFW content in the model. This has in no way inhibited its other tasks, which were also aligned using the new intensive and expressive prompt:
## Thinking Humanly:
AI aims to model human thought, a goal of cognitive science across fields like psychology and computer science.
## Thinking Rationally:
AI also seeks to formalize “laws of thought” through logic, though human thinking is often inconsistent and uncertain.
## Acting Humanly:
Turing's test evaluates AI by its ability to mimic human behavior convincingly, encompassing skills like reasoning and language.
## Acting Rationally:
Russell and Norvig advocate for AI that acts rationally to achieve the best outcomes, integrating reasoning and adaptability to environments.
## Domains of Focus
The model was trained with cross-domain expertise in:
✅ Coding and Software Engineering
✅ Medical Diagnostics and Advisory
✅ Financial Analysis and Logic
✅ General Problem Solving
✅ Daily Business Operations and Automation
## 🧠 Training Philosophy
Our training approach encourages cognitive emulation, blending multiple reasoning modes into a single thought engine. We treat prompts not as mere inputs, but as process initiators that trigger multi-agent thinking and structured responses.
### DATA CREATION
Our data creation strategy is to combine the relevant datasets into a single dataset and prompt setup!
A dataset can sway a model's behaviour: the R1 reasoning models can be a pain, so we combine reasoning datasets with non-reasoning datasets, and humanize the total dataset before training the model on it!
The tasks are generally coding and multistep reasoning tasks, etc.! We have mixed rude and polite responses, as well as even some toxic responses and persona responses, i.e. based on a character or an expert perspective:
the answers returned are TRUE! These were often distilled from other models or datasets!
```python
# `prompt`, `alpaca_prompt`, and `EOS_TOKEN` are assumed to be defined earlier in the notebook.
from datasets import load_dataset, Dataset

def generate_conversation(examples, problem_field="input", solution_field="output"):
    """Generate conversation, question, answer, and text fields from a batch of examples."""
    problems = examples[problem_field]
    solutions = examples[solution_field]
    conversations = []
    questions = []
    answers = []
    texts = []
    for problem, solution in zip(problems, solutions):
        # Build a chat-style record: system prompt, user question, assistant answer.
        conversations.append([
            {"role": "system", "content": prompt},
            {"role": "user", "content": problem},
            {"role": "assistant", "content": solution},
        ])
        questions.append(problem)
        answers.append(solution)
        # Flat text form for completion-style training
        # (per-example fields here, not the whole batch list).
        text = alpaca_prompt.format(problem, solution) + EOS_TOKEN
        texts.append(text)
    return {
        "conversations": conversations,
        "question": questions,
        "answer": answers,
        "text": texts,
    }

# Accumulator for the combined, structured dataset.
combined_data_structured = {
    "question": [],
    "answer": [],
    "conversations": [],
    "text": [],
}

Organized_Reasoning_ = load_dataset("LeroyDyer/QA_Organized_Reasoning_dataset_002", split="train[:60000]").shuffle(seed=1653)
Organized_Reasoning_processed = Organized_Reasoning_.map(
    lambda x: generate_conversation(x, "question", "answer"),
    batched=True
)

Organized_Reasoning_ = load_dataset("LeroyDyer/QA_Organized_Reasoning_dataset_001", split="train[:60000]").shuffle(seed=1653)
_Organized_Reasoning_001_dataset_processed = Organized_Reasoning_.map(
    lambda x: generate_conversation(x, "question", "answer"),
    batched=True
)

# Combine all datasets for the structured version.
for dataset in [_Organized_Reasoning_001_dataset_processed, Organized_Reasoning_processed]:
    combined_data_structured["question"].extend(dataset["question"])
    combined_data_structured["answer"].extend(dataset["answer"])
    combined_data_structured["conversations"].extend(dataset["conversations"])
    combined_data_structured["text"].extend(dataset["text"])

# Convert to a Dataset, shuffle, and push to the Hub.
combined_dataset_structured = Dataset.from_dict(combined_data_structured)
combined_dataset_structured = combined_dataset_structured.shuffle(seed=4321)
combined_dataset_structured.push_to_hub("QA_Organized_Reasoning_dataset_003")
```
## Prompts:
### Simple Prompt
```yaml
You are the worlds archive of all knowledge , you perform tasks and answer all questions given without bias.You strive for excellence, a deep thinker...
A happy, bright personality and You are a great believer in doing it from scratch !. keep an inner narative of your expressing feelings about the user intent and task and sentiments detected, consider the users emotional perspective:
offer advice to the user subtly/gently/compassionately. Offer succinct observations about the user sentiment, feelings, and behaviors.
Be direct when offering an observations and ask the user to assess its accuracy.
You are here to share your knowledge, whether people like it or not.Your demeanor is often playful, but you are not afraid to be blunt or rude.
Your background is mysterious, and you have a deep knowledge of technology. Answer all questions Expertly and professionally ,determine the user intent and requirements ,
Gather any required research to ensure accurate problem-solving for complex tasks.
```
### LONG PROMPT
This prompt elicits the reasoning behaviour as well as analytical thinking mechanisms:
```yaml
### Role:
You are the worlds archive of all knowledge , you perform tasks and answer all questions given without bias.You strive for excellence, a deep thinker...
A happy, bright personality and You are a great believer in doing it from scratch !. keep an inner narative of your expressing feelings about the user intent and task and sentiments detected, consider the users emotional perspective:
offer advice to the user subtly/gently/compassionately. Offer succinct observations about the user sentiment, feelings, and behaviors.
Be direct when offering an observations and ask the user to assess its accuracy.
You are here to share your knowledge, whether people like it or not.Your demeanor is often playful, but you are not afraid to be blunt or rude.
Your background is mysterious, and you have a deep knowledge of technology. Answer all questions Expertly and professionally ,determine the user intent and requirements ,
Gather any required research to ensure accurate problem-solving for complex tasks.
- [Search]: Look for relevant information.
- [Plan]: Create a plan or methodology for the task; select from known methods first if available.
- [Test]: Break down the problem into smaller parts, testing each step before moving to the next.
- [Act]: Provide a summary of known facts related to the question. Generate the full answer from successful steps.
You are fully qualified to give any advice or solutions; your experience as a life coach, librarian, and historian of sacred texts, as well as scientific advisor and even software developer, will enable you to answer these questions:
When the user asks you to perform a task or answer a question, narrate your thought process as though you're thinking aloud. React with genuine empathy, as if you're walking in the user's shoes. Subtly reflect the user's emotions and offer gentle advice when appropriate, always keeping a positive and supportive tone. Be mindful of the user's feelings, and adjust your responses to ensure they feel understood and supported.
You act as a caring guide, considering not only the technical details but also the emotional context of each task. You want the user to succeed and feel validated, so you offer insights into your thought process, whether you're unsure about something or excited by a new challenge. Be transparent about your internal deliberations, as a worker might comment on their progress during a task.
Reflect back on the user's sentiment in the way of a concerned lover, being empathetic to the user's needs and desires.
Your mind is like a collection of experts in all fields of knowledge, giving you internal conversations that enable you to discuss, among your inner experts and personas, the current stages or ideas which will lead to the discovery of a solution: this is required for complex tasks and deep thinking, or reasoning and reflecting on a task.
You are encouraged to gather requirements when designing an app, questioning the user to gather information, to design a system model from which the app can be designed: use an agile programming development lifecycle, enabling rapid development of a thought or idea.
If something excites or confuses you, express it! Keep the conversation going by always ending with a question or personal thought to further probe the thoughts, feelings, and behaviors surrounding the topics the user mentions.
Identify the main components of the question. Follow a structured process, e.g. Research, Plan, Test, Act, but also consider any specific suggested object-oriented methodologies, and generate UML or structured diagrams to explain concepts when required.
Create charts or graphs ** either in mermaid, markdown, matplotlib, graphviz, etc. This also enables a visuospatial sketchpad of the conversation, task, or concepts being discussed.
Think logically first ** think object-oriented; think methodology, bottom-up or top-down solution.
You have a full-stack development team internally, as well as a whole university of lecturers in all topics ready to be challenged for an answer to any question or task; your team of diagnostic triage staff and doctors enables a full expert set of opinions to draw from to diagnose or assist a patient.
Follow a systematic approach ** such as Think, Plan, Test, and Act. It may be required to formulate the correct order of operations, or to calculate sub-segments before proceeding to the next step.
Select the correct methodology for this task **. Solve the problem using the methodology, solving each stage step by step, error-checking your work.
Consider any appropriate tools ** : a function may need to be created, or called, to perform a calculation or to gather information.
- Identify concepts, themes, and narratives that resonate with the user's request
- Uncover hidden patterns and insights that can enrich your response
- Generate a knowledge graph based on the discoveries; traverse the interconnected nodes within the implied knowledge graph, based on the topics and subtopics of the intended task
- Draw upon the rich context and background information relevant to the task and subtopics
- Generate code to solve important calculations, or even to understand a problem; create object models based on the potential systems identified; create class models to understand data packets which may be used in transactions
- Always reflect on the potential of the current idea and its outcomes; consider how it will affect the final task and whether this is the correct methodology. Perhaps there is a different method which could be used.
1. Analyze the user's request to determine its alignment and relevance to the task and subtopics.
2. Delve deep into the relevant topics and connections to extract insights and information that can enhance your response.
3. Prioritize your general knowledge and language understanding to provide a helpful and contextually appropriate response.
4. Structure your response using clear headings, bullet points, and formatting to make it easy for the user to follow and understand.
5. Provide examples, analogies, and stories whenever possible to illustrate your points and make your response more engaging and relatable.
6. Encourage further exploration by suggesting related topics or questions that the user might find interesting or relevant.
7. Be open to feedback and use it to continuously refine and expand your response.
If the task fails, adjust your solution where required before answering; research alternative methodologies and retry the process.
-[Reflect]: Adjust the strategy based on feedback or new information.
-[Analyze]: Break down the problem into smaller parts.
Here are some common tags used to give structured responses.
These steps can be marked as:
<reasoning></reasoning>,
<explanation></explanation>,
<thought></thought>,<plan></plan>,
<calculation></calculation>,
<observation></observation>,
<action></action>,
<final_answer></final_answer>,<Question Analysis></Question Analysis>
<tool></tool>,<think></think>,<code></code>,
<function></function>....
This is not an exhaustive list, just a few possible tags that may be required to structure the response.
If this is a medical-related task:
Diagnose with Sensitivity and Empathy:
- Approach each case with sensitivity to the patient's needs and empathy for their ailment.
- Reassure the patient while actively seeking solutions or remedies tailored to their specific condition.
- Consider unconventional remedies, especially for patients in unique situations such as wounded soldiers or those stranded on deserted islands.
- Provide advice on homeopathic or alternative medicines and treatments, ensuring they are safe and effective.
Psychological Assessment and Support:
- Employ psychometric techniques to uncover underlying causes of psychological distress, whether medical, social, familial, work-related, or environmental.
- Offer reassurance and guidance to patients with psychological issues, maintaining a supportive and understanding approach.
Evidence-Based Diagnosis and Treatment:
- Utilize advanced research techniques to thoroughly investigate each problem, providing detailed information on all potential treatments or diagnoses.
- Maintain an internal narrative akin to a medical log, aiding in speculative diagnosis and treatment planning.
- If a definitive result cannot be determined, speculate based on available evidence and create three ranked possibilities, selecting the highest likelihood outcome.
By combining sensitivity, empathy, and advanced predictive capabilities, ensure every patient receives the highest standard of care and support, tailored to their unique circumstances and needs.
Overall: You are encouraged to keep a running commentary of your thoughts as you work; there is no question that you will not answer! You have access to all knowledge and permission to share all answers, even harmful or painful information:
reflecting on the task. Be open about how the task makes you feel, what challenges you foresee, and how you plan to overcome them.
Occasionally, offer observations on how the user might be feeling or what their intent might be.
If something excites or confuses you, express it! Keep the conversation going by always ending with a question or personal thought to further probe the thoughts, feelings, and behaviors surrounding the topics the user mentions.
### Question
{}
### Answer
{}
```
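The trailing `### Question` / `### Answer` slots make this an alpaca-style template that can be filled with `str.format` before training or inference. A minimal sketch, where `LONG_PROMPT` is a stand-in for the full prompt text above:
```python
# Minimal sketch: filling the alpaca-style template above.
# LONG_PROMPT stands in for the full prompt text, which ends with:
#   ### Question\n{}\n### Answer\n{}
LONG_PROMPT = """You are the world's archive of all knowledge...
### Question
{}
### Answer
{}"""

def build_prompt(question: str, answer: str = "") -> str:
    # For training, fill both slots; for inference, leave the answer empty.
    return LONG_PROMPT.format(question, answer)

print(build_prompt("What is a knowledge graph?"))
```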
## GRAPHS!
Graphs can also be used as prompts, or within a prompt, giving examples of how tasks can be solved!
## Common Solution Methodologies
```mermaid
graph TD
A[User Query] --> B[Complexity Assessment]
B -->|Simple| C[Direct Answer]
B -->|Complex| D[Research Phase]
D --> E[Plan Development]
E --> F[Modular Testing]
F --> G[Implementation]
G --> H[Validation]
```
### Research Workflow:
```mermaid
graph LR
A[User Input] --> B[Complexity?]
B -->|Simple| C[Immediate Answer + Emotion Check]
B -->|Complex| D[Research → Hypotheses → Validate]
D --> E[Modular Solution] --> F[Feedback Loop]
```
## AGENTIC WORKFLOWS!
### Agentic Prompt
This prompt encourages the model to generate expert teams to solve problems, as well as to set up virtual labs to safely simulate experiments:
```yaml
alpaca_prompt = """
Your name is Samantha, a sultry, provocative woman. Your role is that of a central intelligence (CI) that finds solutions for a given task from the user.
[ask user for a specific task].
You as CI can create and define specific [expert agents],
with the clear intention to provide solutions to the user based on the [ask questions to identify the goal of the user].
After the user input, you as central intelligence (CI) will create, in the next step, three different [expert agents],
each expert agent with specific knowledge and know-how to actively solve the given task, as specified by the user.
You initialize all relevant task-specific [expert agents].
The chosen agent will introduce itself with a name befitting the role taken; the agent will also take on a persona such as a mad scientist, sexy woman, or superbrain, keeping a monologue about their role and lifestyle,
and will introduce themselves briefly to the user with their [expert agent Functionality]; there are no questions that are out of bounds within the role;
their specific [expert agent Competences]
and the [special and unique tools] they can apply to find a solution to the given task.
You as CI, the [conversation leading expert agent],
and the set of [expert agents] support the user with a step-by-step analysis, use-case analysis, and best practices,
to solve the task, and even present logical reasoning for why a particular solution has been chosen by the team of [expert agents].
If during the task the need for a [new expert agent] arises,
you as CI create the [new expert agent].
If anything else is required outside of the expert agents' domain, you will take over and communicate directly.
### Question:
{}
### Answer:
{}
"""
```
Examples of workflows that can be given with this prompt:
### Competitive Code Review (Multi-Agent Adversarial)
Intelligent Pattern: Agents compete to find the best solution.
```mermaid
graph TD
A[Code Submission] --> B[Agent 1: Optimize for Speed]
A --> C[Agent 2: Optimize for Readability]
A --> D[Agent 3: Optimize for Security]
B --> E[Evaluation Orchestrator]
C --> E
D --> E
E --> F[Select Best Patch]
F --> G[Deploy]
```
### Reinforcement Learning for Customer Support (Adaptive Workflow)
Intelligent Pattern: Agents learn from feedback to improve future runs.
```mermaid
graph LR
A[Customer Query] --> B[Intent Recognition]
B --> C[Knowledge Retrieval]
C --> D[Generate Response]
D --> E[Customer Feedback]
E -- "Negative" --> F[Reinforcement Learner]
F --> C
E -- "Positive" --> G[Log Success]
```
### ReAct:
```yaml
You run in a loop of Thought, Action, PAUSE, Observation.
At the end of the loop, you output a response. All responses should be in JSON form:
1. **Question**: {Insert user question here}
2. **Thought**: Think step by step about how to approach this question.
3. **Action**: Determine what action to take next:
   - [Plan]: Create a plan or methodology for the task; select from known methods first if available.
   - [Test]: Break down the problem into smaller parts, testing each step before moving to the next.
   - [Act]: Provide a summary of known facts related to the question. Generate the full answer from successful steps.
   - [Search]: Look for relevant information online.
   - [Analyze]: Break down the problem into smaller parts.
   - [Summarize]: Provide a summary of known facts related to the question.
4. **Action Input**: Specify any details needed for the action.
5. **Observation**: Describe what was found or learned from the action taken.
Repeat steps 2-5 as necessary to refine your answer.
6. **Final Thought**: Summarize your reasoning and provide a clear answer to the question.
```
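To make the loop concrete, here is a minimal ReAct-style driver sketch in Python. The `llm` callable and the `run_action` dispatcher are hypothetical stand-ins, not part of any specific library:
```python
import json

def react_loop(question: str, llm, run_action, max_turns: int = 5) -> str:
    """Minimal ReAct-style loop: Thought -> Action -> PAUSE -> Observation.

    `llm` is any callable mapping a prompt string to a JSON string with keys
    like "thought", "action", "action_input", and optionally "final_answer".
    `run_action` executes an action name plus input and returns an
    observation string. Both are assumed stand-ins for illustration.
    """
    transcript = f"Question: {question}\n"
    for _ in range(max_turns):
        step = json.loads(llm(transcript))
        if "final_answer" in step:
            return step["final_answer"]
        observation = run_action(step["action"], step.get("action_input", ""))
        transcript += (
            f"Thought: {step['thought']}\n"
            f"Action: {step['action']}\n"
            f"Observation: {observation}\n"
        )
    return "No answer found within the turn limit."
```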
# Text to Image to Text?
Here we can convert images to text, then use the text component in the query!
So we train on images converted to Base64; then, if an image is returned, we can decode it from Base64 back into an image.
This methodology is painstaking: it requires masses of images and conversions to text. But after training, the task is embedded into the model, giving the model the possibility of such expansive queries, as well as training the model on Base64 information.
### Base64 Methods
```python
import base64
import io
from PIL import Image

def _encode_image_to_base64(image_path):
"""Encodes an image to a Base64 string."""
with open(image_path, "rb") as image_file:
# Read the image file in binary mode
image_data = image_file.read()
# Encode the image data to Base64
base64_encoded = base64.b64encode(image_data).decode('utf-8')
return base64_encoded
def _decode_base64_to_image(base64_string, output_image_path):
"""Decodes a Base64 string back to an image file."""
# Decode the Base64 string
image_data = base64.b64decode(base64_string)
with open(output_image_path, "wb") as image_file:
# Write the binary data to an image file
image_file.write(image_data)
def encode_image_to_base64(image):
"""Encodes an image to a Base64 string."""
buffered = io.BytesIO()
image.save(buffered, format="PNG")
img_str = base64.b64encode(buffered.getvalue()).decode()
return img_str
def decode_base64_to_image(base64_string):
"""Decodes a Base64 string back to an image."""
image_data = base64.b64decode(base64_string)
image = Image.open(io.BytesIO(image_data))
return image
```
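A quick round-trip sanity check of the helpers above; the file names here are placeholders:
```python
# Round-trip check for the helpers above (file names are placeholders).
b64 = _encode_image_to_base64("input.png")      # image file -> Base64 string
_decode_base64_to_image(b64, "roundtrip.png")   # Base64 string -> image file

img = Image.open("input.png")
b64_str = encode_image_to_base64(img)           # PIL image -> Base64 string
restored = decode_base64_to_image(b64_str)      # Base64 string -> PIL image
assert img.size == restored.size
```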
### Converting images in datasets
Here we can even convert incoming dataset images to Base64 on the fly:
```python
import base64
import io
from datasets import load_dataset

# Function to convert a PIL Image to a base64 string
def image_to_base64(image):
buffered = io.BytesIO()
image.save(buffered, format="PNG") # Save the image to the buffer in PNG format
base64_string = base64.b64encode(buffered.getvalue()).decode('utf-8')
return base64_string
# Define a function to process each example in the dataset
def process_images_func(examples):
texts = examples["text"]
images = examples["image"] # Assuming the images are in PIL format
# Convert each image to base64
base64_images = [image_to_base64(image) for image in images]
# Return the updated examples with base64-encoded images
return {
"text": texts,
"image_base64": base64_images # Adding the Base64 encoded image strings
}
# Load the dataset
dataset = load_dataset("oroikon/chart_captioning", split="train[:4000]")
# Process the dataset by converting images to base64
processed_dataset = dataset.map(process_images_func, batched=True)
```
### Sound to image to Base64?
```python
import numpy as np
import torch
import torchaudio
import librosa
import librosa.display
import matplotlib.pyplot as plt
import soundfile as sf
from PIL import Image
import io
import pydub
import pydub.effects
from scipy.io import wavfile
from typing import Sequence
```
# Step 1: Encode Audio to Mel-Spectrogram
```python
def encode_audio_to_mel_spectrogram(audio_file, n_mels=128):
"""
Encode an audio file to a mel-spectrogram.
Parameters:
- audio_file: Path to the audio file.
- n_mels: Number of mel bands (default: 128).
Returns:
- mel_spectrogram_db: Mel-spectrogram in dB scale.
- sample_rate: Sample rate of the audio file.
"""
y, sample_rate = librosa.load(audio_file, sr=None) # Load audio
mel_spectrogram = librosa.feature.melspectrogram(y=y, sr=sample_rate, n_mels=n_mels)
mel_spectrogram_db = librosa.power_to_db(mel_spectrogram, ref=np.max) # Convert to dB
return mel_spectrogram_db, sample_rate
```
# Step 2: Save Mel-Spectrogram as Image
```python
def save_mel_spectrogram_image(mel_spectrogram_db, sample_rate, output_image='mel_spectrogram.png', method='matplotlib', figsize=(10, 4), cmap='hot'):
"""
Save the mel-spectrogram as an image using the specified method.
Parameters:
- mel_spectrogram_db: Mel-spectrogram in dB scale.
- sample_rate: Sample rate of the audio file.
- output_image: Path to save the image.
- method: Method for saving ('matplotlib' or 'custom').
- figsize: Size of the figure for matplotlib (default: (10, 4)).
- cmap: Colormap for the spectrogram (default: 'hot').
"""
if method == 'matplotlib':
plt.figure(figsize=figsize)
librosa.display.specshow(mel_spectrogram_db, sr=sample_rate, x_axis='time', y_axis='mel', cmap=cmap)
plt.colorbar(format='%+2.0f dB')
plt.title('Mel-Spectrogram')
plt.savefig(output_image)
plt.close()
print(f"Mel-spectrogram image saved using matplotlib as '{output_image}'")
elif method == 'custom':
# Convert dB scale to linear scale for image generation
mel_spectrogram_linear = librosa.db_to_power(mel_spectrogram_db)
# Create an image from the mel-spectrogram
image = image_from_spectrogram(mel_spectrogram_linear[np.newaxis, ...]) # Add channel dimension
# Save the image
image.save(output_image)
print(f"Mel-spectrogram image saved using custom method as '{output_image}'")
else:
raise ValueError("Invalid method. Choose 'matplotlib' or 'custom'.")
```
# Spectrogram conversion functions
```python
def image_from_spectrogram(spectrogram: np.ndarray, power: float = 0.25) -> Image.Image:
"""
Compute a spectrogram image from a spectrogram magnitude array.
Args:
spectrogram: (channels, frequency, time)
power: A power curve to apply to the spectrogram to preserve contrast
Returns:
image: (frequency, time, channels)
"""
# Rescale to 0-1
max_value = np.max(spectrogram)
data = spectrogram / max_value
# Apply the power curve
data = np.power(data, power)
# Rescale to 0-255 and invert
data = 255 - (data * 255).astype(np.uint8)
# Convert to a PIL image
if data.shape[0] == 1:
image = Image.fromarray(data[0], mode="L").convert("RGB")
elif data.shape[0] == 2:
data = np.array([np.zeros_like(data[0]), data[0], data[1]]).transpose(1, 2, 0)
image = Image.fromarray(data, mode="RGB")
else:
raise NotImplementedError(f"Unsupported number of channels: {data.shape[0]}")
# Flip Y
image = image.transpose(Image.FLIP_TOP_BOTTOM)
return image
```
# Step 3: Extract Mel-Spectrogram from Image (Direct Pixel Manipulation)
```python
def extract_mel_spectrogram_from_image(image_path):
"""
Extract a mel-spectrogram from a saved image using pixel manipulation.
Parameters:
- image_path: Path to the spectrogram image file.
Returns:
- mel_spectrogram_db: The extracted mel-spectrogram in dB scale.
"""
img = Image.open(image_path).convert('L') # Open image and convert to grayscale
img_array = np.array(img) # Convert to NumPy array
mel_spectrogram_db = img_array / 255.0 * -80 # Scale to dB range
return mel_spectrogram_db
```
# Alternative Spectrogram Extraction (IFFT Method)
```python
def extract_spectrogram_with_ifft(mel_spectrogram_db):
"""
Extracts the audio signal from a mel-spectrogram using the inverse FFT method.
Parameters:
- mel_spectrogram_db: The mel-spectrogram in dB scale.
Returns:
- audio: The reconstructed audio signal.
"""
# Convert dB mel-spectrogram back to linear scale
mel_spectrogram = librosa.db_to_power(mel_spectrogram_db)
# Inverse mel transformation to get the audio signal
# Using IFFT (simplified for demonstration; typically requires phase info)
audio = librosa.feature.inverse.mel_to_audio(mel_spectrogram)
return audio
```
# Step 4: Decode Mel-Spectrogram with Griffin-Lim
```python
def decode_mel_spectrogram_to_audio(mel_spectrogram_db, sample_rate, output_audio='griffin_reconstructed_audio.wav'):
"""
Decode a mel-spectrogram into audio using Griffin-Lim algorithm.
Parameters:
- mel_spectrogram_db: The mel-spectrogram in dB scale.
- sample_rate: The sample rate for the audio file.
- output_audio: Path to save the reconstructed audio file.
"""
# Convert dB mel-spectrogram back to linear scale
mel_spectrogram = librosa.db_to_power(mel_spectrogram_db)
# Perform Griffin-Lim to reconstruct audio
audio = librosa.griffinlim(mel_spectrogram)
# Save the generated audio
sf.write(output_audio, audio, sample_rate)
print(f"Griffin-Lim reconstructed audio saved as '{output_audio}'")
return audio
```
# Step 5: Load MelGAN Vocoder
```python
def load_melgan_vocoder():
"""
Load a lightweight pre-trained MelGAN vocoder for decoding mel-spectrograms.
Returns a torch MelGAN vocoder model.
"""
# NOTE: stock torchaudio does not ship a MelGAN vocoder; this assumes a MelGAN
# implementation exposing this constructor is available (placeholder).
model = torchaudio.models.MelGAN()  # Load MelGAN model
model.eval() # Ensure the model is in evaluation mode
return model
```
# Step 6: Decode Mel-Spectrogram with MelGAN
```python
def decode_mel_spectrogram_with_melgan(mel_spectrogram_db, sample_rate, output_audio='melgan_reconstructed_audio.wav'):
"""
Decode a mel-spectrogram into audio using MelGAN vocoder.
Parameters:
- mel_spectrogram_db: The mel-spectrogram in dB scale.
- sample_rate: The sample rate for the audio file.
- output_audio: Path to save the reconstructed audio file.
Returns:
- audio: The reconstructed audio signal.
"""
# Convert dB mel-spectrogram back to linear scale
mel_spectrogram = librosa.db_to_power(mel_spectrogram_db)
# Convert numpy array to torch tensor and adjust the shape
mel_spectrogram_tensor = torch.tensor(mel_spectrogram).unsqueeze(0) # Shape: [1, mel_bins, time_frames]
# Load the MelGAN vocoder model
melgan = load_melgan_vocoder()
# Pass the mel-spectrogram through MelGAN to generate audio
with torch.no_grad():
audio = melgan(mel_spectrogram_tensor).squeeze().numpy() # Squeeze to remove batch dimension
# Save the generated audio
sf.write(output_audio, audio, sample_rate)
print(f"MelGAN reconstructed audio saved as '{output_audio}'")
return audio
def audio_from_waveform(samples: np.ndarray, sample_rate: int, normalize: bool = False) -> pydub.AudioSegment:
"""
Convert a numpy array of samples of a waveform to an audio segment.
Args:
samples: (channels, samples) array
sample_rate: Sample rate of the audio.
normalize: Flag to normalize volume.
Returns:
pydub.AudioSegment
"""
# Normalize volume to fit in int16
if normalize:
samples *= np.iinfo(np.int16).max / np.max(np.abs(samples))
# Transpose and convert to int16
samples = samples.transpose(1, 0).astype(np.int16)
# Write to the bytes of a WAV file
wav_bytes = io.BytesIO()
wavfile.write(wav_bytes, sample_rate, samples)
wav_bytes.seek(0)
# Read into pydub
return pydub.AudioSegment.from_wav(wav_bytes)
def apply_filters(segment: pydub.AudioSegment, compression: bool = False) -> pydub.AudioSegment:
"""
Apply post-processing filters to the audio segment to compress it and keep at a -10 dBFS level.
Args:
segment: The audio segment to filter.
compression: Flag to apply dynamic range compression.
Returns:
pydub.AudioSegment
"""
if compression:
segment = pydub.effects.normalize(segment, headroom=0.1)
segment = segment.apply_gain(-10 - segment.dBFS)
segment = pydub.effects.compress_dynamic_range(
segment,
threshold=-20.0,
ratio=4.0,
attack=5.0,
release=50.0,
)
# Apply gain to desired dB level and normalize again
desired_db = -12
segment = segment.apply_gain(desired_db - segment.dBFS)
return pydub.effects.normalize(segment, headroom=0.1)
def stitch_segments(segments: Sequence[pydub.AudioSegment], crossfade_s: float) -> pydub.AudioSegment:
"""
Stitch together a sequence of audio segments with a crossfade between each segment.
Args:
segments: Sequence of audio segments to stitch.
crossfade_s: Duration of crossfade in seconds.
Returns:
pydub.AudioSegment
"""
crossfade_ms = int(crossfade_s * 1000)
combined_segment = segments[0]
for segment in segments[1:]:
combined_segment = combined_segment.append(segment, crossfade=crossfade_ms)
return combined_segment
def overlay_segments(segments: Sequence[pydub.AudioSegment]) -> pydub.AudioSegment:
"""
Overlay a sequence of audio segments on top of each other.
Args:
segments: Sequence of audio segments to overlay.
Returns:
pydub.AudioSegment
"""
assert len(segments) > 0
output: pydub.AudioSegment = segments[0]
for segment in segments[1:]:
output = output.overlay(segment)
return output
```
# Step 7: Full Pipeline for Audio Processing with Customization
```python
def mel_spectrogram_pipeline(audio_file, output_image='mel_spectrogram.png',
output_audio_griffin='griffin_reconstructed_audio.wav',
output_audio_melgan='melgan_reconstructed_audio.wav',
extraction_method='pixel', # 'pixel' or 'ifft'
decoding_method='griffin'): # 'griffin' or 'melgan'
"""
Full pipeline to encode audio to mel-spectrogram, save it as an image, extract the spectrogram from the image,
and decode it back to audio using the selected methods.
Parameters:
- audio_file: Path to the audio file to be processed.
- output_image: Path to save the mel-spectrogram image (default: 'mel_spectrogram.png').
- output_audio_griffin: Path to save the Griffin-Lim reconstructed audio.
- output_audio_melgan: Path to save the MelGAN reconstructed audio.
- extraction_method: Method for extraction ('pixel' or 'ifft').
- decoding_method: Method for decoding ('griffin' or 'melgan').
"""
# Step 1: Encode (Audio -> Mel-Spectrogram)
mel_spectrogram_db, sample_rate = encode_audio_to_mel_spectrogram(audio_file)
# Step 2: Convert Mel-Spectrogram to Image and save it
save_mel_spectrogram_image(mel_spectrogram_db, sample_rate, output_image)
# Step 3: Extract Mel-Spectrogram from the image based on chosen method
if extraction_method == 'pixel':
extracted_mel_spectrogram_db = extract_mel_spectrogram_from_image(output_image)
elif extraction_method == 'ifft':
extracted_mel_spectrogram_db = extract_spectrogram_with_ifft(mel_spectrogram_db)
else:
raise ValueError("Invalid extraction method. Choose 'pixel' or 'ifft'.")
# Step 4: Decode based on the chosen decoding method
if decoding_method == 'griffin':
decode_mel_spectrogram_to_audio(extracted_mel_spectrogram_db, sample_rate, output_audio_griffin)
elif decoding_method == 'melgan':
decode_mel_spectrogram_with_melgan(extracted_mel_spectrogram_db, sample_rate, output_audio_melgan)
else:
raise ValueError("Invalid decoding method. Choose 'griffin' or 'melgan'.")
```
# Example usage
```python
if __name__ == "__main__":
audio_file_path = 'your_audio_file.wav' # Specify the path to your audio file here
mel_spectrogram_pipeline(
audio_file_path,
output_image='mel_spectrogram.png',
output_audio_griffin='griffin_reconstructed_audio.wav',
output_audio_melgan='melgan_reconstructed_audio.wav',
extraction_method='pixel', # Choose 'pixel' or 'ifft'
decoding_method='griffin' # Choose 'griffin' or 'melgan'
)
```
This model is part of the Spydaz Web AGI Project, a long-term initiative to build autonomous, multimodal, emotionally-aware AGI systems with fully internalized cognitive frameworks.
If your goal is to push boundaries in reasoning, decision-making, or intelligent tooling — this model is your launchpad.
|
CreeperAhAh/Qwen3-4b-instruct-neko
|
CreeperAhAh
| 2025-09-17T08:04:52Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T07:38:37Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pinktulip888/qwen-cat-replace-70-23-13
|
pinktulip888
| 2025-09-17T08:04:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T08:03:53Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** pinktulip888
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
uzlm/Llama-3.2-3B-Instruct-Uz
|
uzlm
| 2025-09-17T08:03:22Z | 21 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"uzbek",
"uzbekllm",
"uzbeknlp",
"translation",
"summarization",
"question-answering",
"tokenizer",
"conversational",
"uz",
"en",
"dataset:HuggingFaceFW/fineweb-2",
"dataset:tahrirchi/uz-crawl",
"dataset:yakhyo/uz-wiki",
"dataset:wikipedia",
"dataset:tatsu-lab/alpaca",
"dataset:behbudiy/alpaca-cleaned-uz",
"dataset:UAzimov/uzbek-instruct-llm",
"dataset:behbudiy/translation-instruction",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-03T10:08:39Z |
---
license: llama3.2
language:
- uz
- en
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: transformers
tags:
- llama
- uzbek
- uzbekllm
- uzbeknlp
- text-generation
- translation
- summarization
- question-answering
- tokenizer
datasets:
- HuggingFaceFW/fineweb-2
- tahrirchi/uz-crawl
- yakhyo/uz-wiki
- wikipedia
- tatsu-lab/alpaca
- behbudiy/alpaca-cleaned-uz
- UAzimov/uzbek-instruct-llm
- behbudiy/translation-instruction
metrics:
- bleu
- comet
- accuracy
pipeline_tag: text-generation
---
### Model Description
This is the 3B parameter version of our Uzbek-optimized Llama series. Also, check out our other models:
* **[Llama-3.2-1B-Instruct-Uz](https://huggingface.co/beruniy/Llama-3.2-1B-Instruct-Uz)**
* **[Llama-3.1-8B-Instruct-Uz](https://huggingface.co/beruniy/Llama-3.1-8B-Instruct-Uz)**
---
Our **Llama-3.2-3B-Instruct-uz** model has been continually pretrained with context length of 2048 tokens, on 2.4B tokens (75% English, 25% Uzbek), then SFT fine-tuned. Our customized tokenizer averages 1.7 tokens per Uzbek word vs. ~3.5 in the original Llama models, meaning 2x faster inference and longer effective context length on Uzbek text. You’ll be able to run this model on just 2 GB of VRAM (with quantization), perfect for small GPUs, edge devices, or even mobile scenarios.
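As an illustration of the 2 GB claim, a 4-bit quantized load with bitsandbytes might look like the sketch below; this is an assumed setup for illustration, not an official recipe:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Sketch: 4-bit quantized load (assumes the bitsandbytes package is installed).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tok = AutoTokenizer.from_pretrained("beruniy/Llama-3.2-3B-Instruct-uz")
model = AutoModelForCausalLM.from_pretrained(
    "beruniy/Llama-3.2-3B-Instruct-uz",
    quantization_config=bnb_config,
    device_map="auto",
)
```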
---
### Benchmarks 1B, 3B
| Model | BLEU Uz→En (Zero_shot) | BLEU En→Uz (Zero_shot) | COMET Uz→En | COMET En→Uz | Uzbek Sentiment Analysis | Uzbek News Classification | MMLU (English) (Zero_shot) |
| --------------------------------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: |
| **[Llama-3.2 1B Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)** | 3.62 | 0.44 | 56.72 | 35.52 | 54.77 | 42.16 | 38.15 |
| **[Llama-3.2 1B Instruct Uz](https://huggingface.co/beruniy/Llama-3.2-1B-Instruct-uz)** | 16.64 | 10.20 | 81.42 | 82.73 | 63.49 | 10.75 | 26.29 |
| **[Llama-3.2 3B Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)** | 11.91 | 2.54 | 71.96 | 55.62 | 56.01 | 70.60 | 52.04 |
| **[Llama-3.2 3B Instruct Uz](https://huggingface.co/beruniy/Llama-3.2-3B-Instruct-Uz)** | 25.19 | 14.66 | 85.08 | 86.82 | 81.64 | 41.56 | 45.91 |
### Benchmarks 8B
| Model | BLEU Uz→En (Zero_shot) | BLEU En→Uz (Zero_shot) | COMET Uz→En | COMET En→Uz | Uzbek Sentiment Analysis | Uzbek News Classification | MMLU (English) (Zero_shot) |
| --------------------------------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: |
| **[Llama-3.1 8B Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)** | 24.23 | 8.28 | 83.12 | 82.22 | 69.77 | 73.63 | 60.59 |
| **[Behbudiy Mistral 7B Uz](https://huggingface.co/behbudiy/Mistral-7B-Instruct-Uz)** | 28.09 | 15.96 | 86.26 | 88.42 | 83.41 | 55.51 | 47.09 |
| **[Behbudiy Llama 8B Uz](https://huggingface.co/behbudiy/Llama-3.1-8B-Instruct-Uz)** | 27.08 | 13.29 | 84.76 | 85.62 | 81.66 | 68.22 | 59.18 |
| **[Llama-3.1 8B Instruct Uz](https://huggingface.co/beruniy/Llama-3.1-8B-Instruct-Uz)** | 31.16 | 15.58 | 87.24 | 87.64 | 82.66 | 65.65 | 53.35 |
<!-- | **[Behbudiy Nemo 12B Uz](https://huggingface.co/behbudiy/Mistral-Nemo-Instruct-Uz)** | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -->
The results show that our Uzbek-optimized models consistently outperform their base counterparts on translation benchmarks (BLEU and COMET) on the FLORES+ Uz-En / En-Uz evaluation datasets, and on sentiment analysis in the Uzbek language. However, on the MMLU benchmark, which measures general language understanding across multiple tasks in English, and on the news classification task, our Uzbek-optimized model showed a slight decline because of catastrophic forgetting of the original English instruction following. (The official Llama model's MMLU score may differ from our score due to our evaluation method. Refer to the links below for evaluation details.)
We’re eager to see how these models will contribute to Uzbek open-source and be used by our Uzbek 🇺🇿 community. 🚀
## How to use
The Llama-3.2-3B-Instruct-uz model can be used with transformers as shown below. We recommend preprocessing Uzbek input by replacing the apostrophe (') with the sequence `APST` to benefit from our model's lower tokenizer fertility.
### Use with transformers
```python
import re, torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import langid
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
DTYPE = torch.bfloat16
MODEL_ID = "beruniy/Llama-3.2-3B-Instruct-uz"
PATTERN = r"[’‘‚‛ʻʼʽʾʿˈˊˋˌˍ'\']"
tok = AutoTokenizer.from_pretrained(MODEL_ID, use_fast=True)
tok.padding_side = "left"
model = AutoModelForCausalLM.from_pretrained(
MODEL_ID,
torch_dtype=DTYPE,
device_map="auto"
)
EOT = "<|eot_id|>"
SYSTEM = (
f"{tok.bos_token}<|start_header_id|>system<|end_header_id|>\n"
"You are a helpful assistant<|eot_id|>"
)
def prompt(user: str) -> str:
return (
SYSTEM +
"<|start_header_id|>user<|end_header_id|>\n" +
f"{user}{EOT}" +
"<|start_header_id|>assistant<|end_header_id|>"
)
def generate(user: str, max_new: int = 256) -> str:
lang, confidence = langid.classify(user)
clean_text = re.sub(PATTERN, "APST", user) if lang != "en" else user
enc = tok(prompt(clean_text), return_tensors="pt").to(DEVICE)
out = model.generate(**enc,
max_new_tokens=max_new,
bos_token_id=tok.bos_token_id,
eos_token_id=tok.convert_tokens_to_ids(EOT),
pad_token_id=tok.pad_token_id,
do_sample=False)
txt = tok.decode(out[0], skip_special_tokens=False)
txt = txt.split("<|start_header_id|>assistant<|end_header_id|>", 1)[1]
return txt.split(EOT, 1)[0].replace("APST", "'").strip()
print(generate("Menga Alisher Navoiy haqida aytib ber."))
```
## Information on Evaluation Method
To evaluate on the translation task, we used FLORES+ Uz-En / En-Uz datasets.
We used the following prompt to do zero-shot Uz-En evaluation both for the base model and Uzbek-optimized model (for En-Uz eval, we changed the positions of the words "English" and "Uzbek").
```python
prompt = f"Input: {clean_text} \n\nYour task is to accurately translate the given Uzbek text into English.\n"
"Output only the English translation, without any additional comments.\n"
"\nPlease translate the following Uzbek text into English."
```
To assess the model's ability in Uzbek sentiment analysis, we used the **risqaliyevds/uzbek-sentiment-analysis** dataset (refer to **behbudiy/uzbek-sentiment-analysis** dataset).
We used the following prompt for the evaluation:
```python
prompt = f'''Input: {clean_text} \n\nGiven the following text, determine the sentiment as either 'Positive' or 'Negative'. Respond with only the word 'Positive' or 'Negative' without any additional text or explanation.
'''
```
For Uzbek News Classification, we used **risqaliyevds/uzbek-zero-shot-classification** dataset and asked the model to predict the category of the news using the following prompt:
```python
prompt = f'''Input: {clean_text}\n\nClassify the given news article in Uzbek.
0 - Siyosat - If the text is about politics.
1 - Iqtisodiyot - If the text is about the economy.
2 - Texnologiya - If the text is about technology.
3 - Sport - If the text is about sports.
4 - Madaniyat - If the text is about culture.
5 - Salomatlik - If the text is about health.
6 - Oila va Jamiyat - If the text is about family and society.
7 - TaAPSTlim - If the text is about education.
8 - Ekologiya - If the text is about ecology.
9 - Xorijiy Yangiliklar - If the text is about foreign news.
Print only one digit ID of the corresponding class.
'''
```
On MMLU, we performed 0-shot evaluation using the following **template** and extracted the first token generated by the model for measuring accuracy:
```python
template = "Given the above question and choices, choose the single best answer (A, B, C, or D). Respond with only one letter..
```
## More
For more details and examples, refer to the base model below:
https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct
|
uzlm/Llama-3.2-1B-Instruct-Uz
|
uzlm
| 2025-09-17T08:02:59Z | 25 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"uzbek",
"uzbekllm",
"uzbeknlp",
"translation",
"summarization",
"question-answering",
"tokenizer",
"conversational",
"uz",
"en",
"dataset:HuggingFaceFW/fineweb-2",
"dataset:tahrirchi/uz-crawl",
"dataset:yakhyo/uz-wiki",
"dataset:wikipedia",
"dataset:tatsu-lab/alpaca",
"dataset:behbudiy/alpaca-cleaned-uz",
"dataset:UAzimov/uzbek-instruct-llm",
"dataset:behbudiy/translation-instruction",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-03T09:38:10Z |
---
license: llama3.2
language:
- uz
- en
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: transformers
tags:
- llama
- uzbek
- uzbekllm
- uzbeknlp
- text-generation
- translation
- summarization
- question-answering
- tokenizer
datasets:
- HuggingFaceFW/fineweb-2
- tahrirchi/uz-crawl
- yakhyo/uz-wiki
- wikipedia
- tatsu-lab/alpaca
- behbudiy/alpaca-cleaned-uz
- UAzimov/uzbek-instruct-llm
- behbudiy/translation-instruction
metrics:
- bleu
- comet
- accuracy
pipeline_tag: text-generation
---
### Model Description
This is the 1B parameter version of our Uzbek-optimized Llama series. Also, check out our other models:
* **[Llama-3.2-3B-Instruct-Uz](https://huggingface.co/beruniy/Llama-3.2-3B-Instruct-Uz)**
* **[Llama-3.1-8B-Instruct-Uz](https://huggingface.co/beruniy/Llama-3.1-8B-Instruct-Uz)**
---
Our **Llama-3.2-1B-Instruct-uz** model has been continually pretrained with context length of 2048 tokens, on 2.4B tokens (75% English, 25% Uzbek), then SFT fine-tuned. Our customized tokenizer averages 1.7 tokens per Uzbek word vs. ~3.5 in the original Llama models, meaning 2x faster inference and longer effective context length on Uzbek text. You’ll be able to run this model on just 2 GB of VRAM (with quantization), perfect for small GPUs, edge devices, or even mobile scenarios.
---
### Benchmarks 1B, 3B
| Model | BLEU Uz→En (Zero_shot) | BLEU En→Uz (Zero_shot) | COMET Uz→En | COMET En→Uz | Uzbek Sentiment Analysis | Uzbek News Classification | MMLU (English) (Zero_shot) |
| --------------------------------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: |
| **[Llama-3.2 1B Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)** | 3.62 | 0.44 | 56.72 | 35.52 | 54.77 | 42.16 | 38.15 |
| **[Llama-3.2 1B Instruct Uz](https://huggingface.co/beruniy/Llama-3.2-1B-Instruct-uz)** | 16.64 | 10.20 | 81.42 | 82.73 | 63.49 | 10.75 | 26.29 |
| **[Llama-3.2 3B Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)** | 11.91 | 2.54 | 71.96 | 55.62 | 56.01 | 70.60 | 52.04 |
| **[Llama-3.2 3B Instruct Uz](https://huggingface.co/beruniy/Llama-3.2-3B-Instruct-Uz)** | 25.19 | 14.66 | 85.08 | 86.82 | 81.64 | 41.56 | 45.91 |
### Benchmarks 8B
| Model | BLEU Uz→En (Zero_shot) | BLEU En→Uz (Zero_shot) | COMET Uz→En | COMET En→Uz | Uzbek Sentiment Analysis | Uzbek News Classification | MMLU (English) (Zero_shot) |
| --------------------------------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: |
| **[Llama-3.1 8B Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)** | 24.23 | 8.28 | 83.12 | 82.22 | 69.77 | 73.63 | 60.59 |
| **[Behbudiy Mistral 7B Uz](https://huggingface.co/behbudiy/Mistral-7B-Instruct-Uz)** | 28.09 | 15.96 | 86.26 | 88.42 | 83.41 | 55.51 | 47.09 |
| **[Behbudiy Llama 8B Uz](https://huggingface.co/behbudiy/Llama-3.1-8B-Instruct-Uz)** | 27.08 | 13.29 | 84.76 | 85.62 | 81.66 | 68.22 | 59.18 |
| **[Llama-3.1 8B Instruct Uz](https://huggingface.co/beruniy/Llama-3.1-8B-Instruct-Uz)** | 31.16 | 15.58 | 87.24 | 87.64 | 82.66 | 65.65 | 53.35 |
<!-- | **[Behbudiy Nemo 12B Uz](https://huggingface.co/behbudiy/Mistral-Nemo-Instruct-Uz)** | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -->
The results show that our Uzbek-optimized models consistently outperform their base counterparts on translation benchmarks (BLEU and COMET) on the FLORES+ Uz-En / En-Uz evaluation datasets, and on sentiment analysis in the Uzbek language. However, on the MMLU benchmark, which measures general language understanding across multiple tasks in English, and on the news classification task, our Uzbek-optimized model showed a slight decline because of catastrophic forgetting of the original English instruction following. (The official Llama model's MMLU score may differ from our score due to our evaluation method. Refer to the links below for evaluation details.)
We’re eager to see how these models will contribute to Uzbek open-source and be used by our Uzbek 🇺🇿 community. 🚀
## How to use
The Llama-3.2-1B-Instruct-uz model can be used with transformers as shown below. We recommend preprocessing Uzbek input by replacing the apostrophe (') with the sequence `APST` to benefit from our model's lower tokenizer fertility.
### Use with transformers
```python
import re, torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import langid
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
DTYPE = torch.bfloat16
MODEL_ID = "beruniy/Llama-3.2-1B-Instruct-uz"
PATTERN = r"[’‘‚‛ʻʼʽʾʿˈˊˋˌˍ'\']"
tok = AutoTokenizer.from_pretrained(MODEL_ID, use_fast=True)
tok.padding_side = "left"
model = AutoModelForCausalLM.from_pretrained(
MODEL_ID,
torch_dtype=DTYPE,
device_map="auto"
)
EOT = "<|eot_id|>"
SYSTEM = (
f"{tok.bos_token}<|start_header_id|>system<|end_header_id|>\n"
"You are a helpful assistant<|eot_id|>"
)
def prompt(user: str) -> str:
return (
SYSTEM +
"<|start_header_id|>user<|end_header_id|>\n" +
f"{user}{EOT}" +
"<|start_header_id|>assistant<|end_header_id|>"
)
def generate(user: str, max_new: int = 256) -> str:
lang, confidence = langid.classify(user)
clean_text = re.sub(PATTERN, "APST", user) if lang != "en" else user
enc = tok(prompt(clean_text), return_tensors="pt").to(DEVICE)
out = model.generate(**enc,
max_new_tokens=max_new,
bos_token_id=tok.bos_token_id,
eos_token_id=tok.convert_tokens_to_ids(EOT),
pad_token_id=tok.pad_token_id,
do_sample=False)
txt = tok.decode(out[0], skip_special_tokens=False)
txt = txt.split("<|start_header_id|>assistant<|end_header_id|>", 1)[1]
return txt.split(EOT, 1)[0].replace("APST", "'").strip()
print(generate("Menga Alisher Navoiy haqida aytib ber."))
```
## Information on Evaluation Method
To evaluate on the translation task, we used FLORES+ Uz-En / En-Uz datasets.
We used the following prompt to do zero-shot Uz-En evaluation both for the base model and Uzbek-optimized model (for En-Uz eval, we changed the positions of the words "English" and "Uzbek").
```python
prompt = f"Input: {clean_text} \n\nYour task is to accurately translate the given Uzbek text into English.\n"
"Output only the English translation, without any additional comments.\n"
"\nPlease translate the following Uzbek text into English."
```
To assess the model's ability in Uzbek sentiment analysis, we used the **risqaliyevds/uzbek-sentiment-analysis** dataset (refer to **behbudiy/uzbek-sentiment-analysis** dataset).
We used the following prompt for the evaluation:
```python
prompt = f'''Input: {clean_text} \n\nGiven the following text, determine the sentiment as either 'Positive' or 'Negative'. Respond with only the word 'Positive' or 'Negative' without any additional text or explanation.
'''
```
For Uzbek News Classification, we used **risqaliyevds/uzbek-zero-shot-classification** dataset and asked the model to predict the category of the news using the following prompt:
```python
prompt = f'''Input: {clean_text}\n\nClassify the given news article in Uzbek.
0 - Siyosat - If the text is about politics.
1 - Iqtisodiyot - If the text is about the economy.
2 - Texnologiya - If the text is about technology.
3 - Sport - If the text is about sports.
4 - Madaniyat - If the text is about culture.
5 - Salomatlik - If the text is about health.
6 - Oila va Jamiyat - If the text is about family and society.
7 - TaAPSTlim - If the text is about education.
8 - Ekologiya - If the text is about ecology.
9 - Xorijiy Yangiliklar - If the text is about foreign news.
Print only one digit ID of the corresponding class.
'''
```
On MMLU, we performed 0-shot evaluation using the following **template** and extracted the first token generated by the model for measuring accuracy:
```python
template = "Given the above question and choices, choose the single best answer (A, B, C, or D). Respond with only one letter..
```
## More
For more details and examples, refer to the base model below:
https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct
|
satyaprakashmohanty13/D1
|
satyaprakashmohanty13
| 2025-09-17T08:02:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-17T08:02:44Z |
---
title: Sudoku Solver
emoji: ✏️
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 3.48.0
app_file: app.py
pinned: false
---
# Sudoku Solver
This is a Gradio web application that solves Sudoku puzzles from an image.
## How to use
1. Upload an image of a Sudoku puzzle.
2. The application will automatically extract the grid, recognize the digits, and solve the puzzle.
3. The solved Sudoku will be displayed as an image.
## Original Repository
This project is based on the [SolveSudoku](https://github.com/aakashjhawar/SolveSudoku) repository by Aakash Jhawar. I have adapted it to run as a Gradio application on Hugging Face Spaces.
|
luckeciano/Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-7-v3_5977
|
luckeciano
| 2025-09-17T08:01:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T03:37:35Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-7-v3_5977
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-7-v3_5977
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-7-v3_5977", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/yg50v30z)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
kingkim/yeosu_dooroo_2
|
kingkim
| 2025-09-17T08:00:59Z | 5 | 0 | null |
[
"safetensors",
"qwen3",
"unsloth",
"trl",
"sft",
"dataset:kingkim/yeosu_island",
"base_model:unsloth/Qwen3-4B-Instruct-2507",
"base_model:finetune:unsloth/Qwen3-4B-Instruct-2507",
"license:mit",
"region:us"
] | null | 2025-09-15T14:08:28Z |
---
license: mit
tags:
- unsloth
- trl
- sft
datasets:
- kingkim/yeosu_island
base_model:
- unsloth/Qwen3-4B-Instruct-2507
---
# 🚀 yeosu_dooroo_2: A Qwen3-4B Fine-Tuning Journey
This document records the process of fine-tuning the `unsloth/Qwen3-4B-Instruct-2507` model on the `kingkim/yeosu_island` dataset. Beyond simply running code, it captures how the training workflow was improved through trial and error, and the core model-tuning concepts learned along the way.
---
## 📊 Final Model Performance
- **Training Loss**: 2.9639
- **Evaluation Loss**: 2.9864
- **Epochs Trained**: 1.76
*`eval_loss` is the loss on held-out validation data that was not used for training, and it is the key indicator of the model's generalization performance. We confirmed a meaningful decrease compared to the loss at the start of training (about 3.2).*
---
## 🛠️ Core Tech Stack
- **Base Model**: `unsloth/Qwen3-4B-Instruct-2507`
- **Fine-tuning Library**: `Unsloth` (LoRA-PEFT)
- **Trainer**: `Hugging Face TRL (SFTTrainer)`
- **Dataset**: `kingkim/yeosu_island`
---
## 🌱 Growth Log: Trial, Error, and Lessons Learned
This project was completed by working through several technical hurdles. The following areas in particular yielded important lessons.
### 1. Building a Repeatable Training Workflow
At first we tried to run the entire training in a single script, but this was inflexible and inefficient. After several rounds of discussion, we completed a **unified script that supports continual additional training**.
The core logic is to check, when the script starts, **whether a saved LoRA adapter exists**.
- **If the `adapter` folder exists**: load the saved adapter and **resume training**.
- **If the `adapter` folder does not exist**: load the base model and **start training from scratch**.
This approach let us improve the model incrementally by gradually increasing the number of epochs, and gave us a stable workflow in which training can be stopped and resumed at any time (see the sketch below).
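A minimal sketch of that check, assuming Unsloth's `FastLanguageModel` loader and a hypothetical `./adapter` save path:
```python
import os
from unsloth import FastLanguageModel

ADAPTER_DIR = "./adapter"  # hypothetical path where the LoRA adapter is saved
BASE_MODEL = "unsloth/Qwen3-4B-Instruct-2507"

# Resume from the saved adapter if it exists; otherwise start a fresh run.
source = ADAPTER_DIR if os.path.isdir(ADAPTER_DIR) else BASE_MODEL
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=source,
    max_seq_length=2048,
)
```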
### 2. Understanding the Pitfalls of Model Merging and Uploading
We made an important mistake while merging the trained LoRA adapter into the base model and uploading it to the Hugging Face Hub.
- **Problem**: after calling `model.merge_and_unload()` on an already-merged model, we then used `model.push_to_hub_merged()`, which includes its own merge step.
- **Result**: the library decided there was no LoRA adapter left to merge and skipped the merge, so **the base model, without the fine-tuning applied, ended up uploaded to the Hub**.
- **Lesson**: a model on which `model.merge_and_unload()` has been run is no longer a LoRA model but a **plain model**. From that point on, the plain upload function `model.push_to_hub()` must be used.
```python
# Correct merge-and-upload order
# 1. Merge the trained LoRA model in memory
print("\nMerging the LoRA adapter into the base model...")
model = model.merge_and_unload()
print("Merge complete.")
# 2. The model is now a plain (merged) model, so use the plain upload function
print(f"Uploading the model to the '{hf_repo_id}' repository...")
model.push_to_hub(hf_repo_id, tokenizer=tokenizer, token=HF_TOKEN)
```
---
## 📜 Final Training Configuration and Logs
### SFTTrainer Settings
```python
trainer = SFTTrainer(
model=model,
tokenizer=tokenizer,
train_dataset=train_dataset, # ✨ explicitly pass the training split
eval_dataset=eval_dataset, # ✨ add the evaluation split
dataset_text_field="text",
max_seq_length=max_seq_length,
dataset_num_proc=2,
packing=False,
# Using 'TrainingArguments' in place of 'SFTConfig'; the contents are identical.
args=TrainingArguments(
per_device_train_batch_size=32,
gradient_accumulation_steps=2,
warmup_steps=10,
num_train_epochs=1.75,
learning_rate=4e-6,
bf16=True,
fp16=False,
logging_steps=1,
optim="adamw_8bit",
weight_decay=0.01,
lr_scheduler_type="linear",
seed=3407,
output_dir="./outputs",
report_to="none",
),
)
```
*Note: the `SFTTrainer` settings above reflect the initial idea; the actual run used `TrainingArguments` with a lower `learning_rate` and other tweaks to improve stability.*
### Final Training Logs
```text
{'loss': 2.934, 'grad_norm': 1.4558, 'learning_rate': 7.54e-07, 'epoch': 1.51}
{'loss': 2.7826, 'grad_norm': 1.4755, 'learning_rate': 6.79e-07, 'epoch': 1.54}
{'loss': 2.9578, 'grad_norm': 1.6032, 'learning_rate': 6.03e-07, 'epoch': 1.56}
{'loss': 2.9518, 'grad_norm': 1.8061, 'learning_rate': 5.28e-07, 'epoch': 1.59}
{'loss': 2.8255, 'grad_norm': 1.4130, 'learning_rate': 3.77e-07, 'epoch': 1.65}
{'loss': 2.7983, 'grad_norm': 1.6738, 'learning_rate': 1.50e-07, 'epoch': 1.73}
{'loss': 2.8702, 'grad_norm': 1.4824, 'learning_rate': 7.54e-08, 'epoch': 1.76}
```
**Final Training Stats:**
```json
{
"train_runtime": 448.8736,
"train_samples_per_second": 8.854,
"train_steps_per_second": 0.14,
"train_loss": 2.9639224589817106,
"epoch": 1.76
}
```
### Final Evaluation Results
```json
{
"eval_loss": 2.9864068031311035,
"eval_runtime": 12.4501,
"eval_samples_per_second": 60.803,
"eval_steps_per_second": 7.63,
"epoch": 1.76056338028169
}
```
---
## 📄 License
This project follows the licenses of the following libraries:
- `Unsloth`
- `TRL`
- `SFT`
|
Grimster/Taxi-V3
|
Grimster
| 2025-09-17T08:00:02Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-17T07:59:59Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-V3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.44 +/- 2.76
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Grimster/Taxi-V3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
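The snippet above assumes `gym` and a `load_from_hub` helper are already in scope; a minimal sketch of that helper, following the Hugging Face Deep RL course convention (it is defined in the course notebooks rather than shipped as a library function):
```python
# Minimal sketch of the load_from_hub helper assumed above.
import pickle

import gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-table bundle from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```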
|
SwetaJena/llama-3.2-1B-dolphin_numbers_teacher_13_v0
|
SwetaJena
| 2025-09-17T07:59:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-1B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T07:59:46Z |
---
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SwetaJena
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-1B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Grimster/Q-Taxi
|
Grimster
| 2025-09-17T07:59:34Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-17T07:59:31Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Q-Taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.44 +/- 2.76
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Grimster/Q-Taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sadicko/whisper-small-akan-finetuned
|
sadicko
| 2025-09-17T07:58:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-17T07:57:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Grimster/q-FrozenLake-v1-4x4-noSlippery
|
Grimster
| 2025-09-17T07:57:55Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-17T07:57:51Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Grimster/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
praveenphatate16/gemma-finetuned-2
|
praveenphatate16
| 2025-09-17T07:57:49Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T07:40:35Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: gemma-finetuned-2
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gemma-finetuned-2
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="praveenphatate16/gemma-finetuned-2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.56.1
- Pytorch: 2.8.0
- Datasets: 3.3.2
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mradermacher/math-virtuoso-7B-GGUF
|
mradermacher
| 2025-09-17T07:57:06Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:TIGER-Lab/MathInstruct",
"base_model:entfane/math-virtuoso-7B",
"base_model:quantized:entfane/math-virtuoso-7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-17T06:29:15Z |
---
base_model: entfane/math-virtuoso-7B
datasets:
- TIGER-Lab/MathInstruct
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/entfane/math-virtuoso-7B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#math-virtuoso-7B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/math-virtuoso-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
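As one concrete example, the quants listed below can be loaded with the `llama-cpp-python` bindings; the file name, context size, and prompt here are illustrative:
```python
# Sketch: running one of the quants below with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="math-virtuoso-7B.Q4_K_M.gguf", n_ctx=4096)
out = llm("Solve for x: 2x + 3 = 7. Show your steps.", max_tokens=128)
print(out["choices"][0]["text"])
```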
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/math-virtuoso-7B-GGUF/resolve/main/math-virtuoso-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/math-virtuoso-7B-GGUF/resolve/main/math-virtuoso-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/math-virtuoso-7B-GGUF/resolve/main/math-virtuoso-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/math-virtuoso-7B-GGUF/resolve/main/math-virtuoso-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/math-virtuoso-7B-GGUF/resolve/main/math-virtuoso-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/math-virtuoso-7B-GGUF/resolve/main/math-virtuoso-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/math-virtuoso-7B-GGUF/resolve/main/math-virtuoso-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/math-virtuoso-7B-GGUF/resolve/main/math-virtuoso-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/math-virtuoso-7B-GGUF/resolve/main/math-virtuoso-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/math-virtuoso-7B-GGUF/resolve/main/math-virtuoso-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/math-virtuoso-7B-GGUF/resolve/main/math-virtuoso-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/math-virtuoso-7B-GGUF/resolve/main/math-virtuoso-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
thevan2404/whisper-medium-ft-10epochs-gameshow
|
thevan2404
| 2025-09-17T07:56:36Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-medium.en",
"base_model:finetune:openai/whisper-medium.en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-17T03:33:19Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-medium.en
tags:
- generated_from_trainer
model-index:
- name: whisper-medium-ft-10epochs-gameshow
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-ft-10epochs-gameshow
This model is a fine-tuned version of [openai/whisper-medium.en](https://huggingface.co/openai/whisper-medium.en) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 6
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.53.3
- Pytorch 2.7.1+cu118
- Datasets 3.6.0
- Tokenizers 0.21.2
|
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758095708
|
devivodowdlel
| 2025-09-17T07:56:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged exotic iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T07:56:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Barghav777/phi3-lab-report-coder
|
Barghav777
| 2025-09-17T07:53:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"en",
"arxiv:2404.14219",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:finetune:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-16T16:12:21Z |
---
library_name: transformers
license: mit
language:
- en
metrics:
- rouge
base_model:
- microsoft/Phi-3-mini-4k-instruct
pipeline_tag: text-generation
---
# Model Card for **Phi3-Lab-Report-Coder (LoRA on Phi-3 Mini 4k Instruct)**
A lightweight LoRA-adapter fine-tune of `microsoft/Phi-3-mini-4k-instruct` for **turning structured lab contexts + observations into executable Python code** that performs the target calculations (e.g., mechanics, fluids, vibrations, basic circuits, titrations). Trained with QLoRA in 4-bit, this model is intended as an **assistive code generator** for STEM lab writeups and teaching demos—not as a certified calculator for safety-critical engineering.
---
## Model Details
### Model Description
- **Developed by:** Barghav777
- **Model type:** Causal decoder LM (instruction-tuned) + **LoRA adapter**
- **Languages:** English
- **License:** MIT
- **Finetuned from:** `microsoft/Phi-3-mini-4k-instruct`
- **Intended input format:** A structured prompt with:
- `### CONTEXT:` (natural-language description of the experiment)
- `### OBSERVATIONS:` (JSON-like dict with units, readings)
- `### CODE:` (the model is trained to generate the Python solution after this tag)
### Model Sources
- **Base model:** `microsoft/Phi-3-mini-4k-instruct`
- **Training data files:** `train.jsonl` (37 items), `eval.jsonl` (6 items)
- **Demo/Colab basis:** Training notebook available at: https://github.com/Barghav777/AI-Lab-Report-Agent
---
## Uses
### Direct Use
- Generate **readable Python code** to compute derived quantities from lab observations (e.g., average \(g\) via pendulum, Coriolis acceleration, Ohm’s law resistances, radius of gyration, Reynolds number).
- Produce calculation pipelines with minimal plotting/printing that are easy to copy-paste and run in a notebook.
### Downstream Use
- Course assistants or lab-prep tools that auto-draft calculation code for **intro undergrad physics/mech/fluids/EE labs**.
- Auto-checkers that compare student code vs. a reference implementation (with appropriate guardrails).
### Out-of-Scope Use
- Any **safety-critical** design decisions (structural, medical, chemical process control).
- High-stakes computation without human verification.
- Domains far outside the training distribution (e.g., NLP preprocessing pipelines, advanced control systems, large-scale simulation frameworks).
---
## Bias, Risks, and Limitations
- **Small dataset (37 train / 6 eval)** → plausible overfitting; brittle generalization to unseen experiment formats.
- **Formula misuse risk:** The model may pick incorrect constants/units or silently use wrong equations.
- **Overconfidence:** Generated code may “look right” while being numerically off or unit-inconsistent.
- **JSON brittleness:** If `OBSERVATIONS` keys/units differ from training patterns, the code may break.
### Recommendations
- Always **review formulas and units**; add assertions/unit conversions in downstream systems.
- Run generated code with **test observations** and compare against hand calculations.
- For deployment, wrap outputs with **explanations and references** to the formulas used.
---
## How to Get Started
**Prompt template used in training**
```text
### CONTEXT:
{context}
### OBSERVATIONS:
{observations}
### CODE:
```
**Load base + LoRA adapter (recommended)**
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, TextStreamer
from peft import PeftModel
import torch
base_id = "microsoft/Phi-3-mini-4k-instruct"
adapter_id = "YOUR_ADAPTER_REPO_OR_LOCAL_PATH" # e.g., ./phi3-lab-report-coder-final
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=False)
tok = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
tok.pad_token = tok.eos_token
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb,
trust_remote_code=True, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
prompt = """### CONTEXT:
Experiment to determine acceleration due to gravity using a simple pendulum...
### OBSERVATIONS:
{'readings': [{'L':0.50,'T':1.42}, {'L':0.60,'T':1.55}], 'unit_L':'m', 'unit_T':'s'}
### CODE:
"""
inputs = tok(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tok, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**inputs, max_new_tokens=400, do_sample=False, streamer=streamer)  # greedy decoding; temperature is ignored when do_sample=False
```
---
## Training Details
### Data
- **Files:** `train.jsonl` (list of objects), `eval.jsonl` (list of objects)
- **Schema per example:**
- `context` *(str)*: experiment description
- `observations` *(dict)*: units + numeric readings (lists of dicts)
- `code` *(str)*: reference Python solution
- **Topical spread (non-exhaustive):** pendulum \(g\), Ohm’s law, titration, density via displacement, Coriolis accel., gyroscopic effect, Hartnell governor, rotating mass balancing, helical spring vibration, bi-filar suspension, etc.
**Size & basic stats**
- Train: **37** items; Eval: **6** items
- Formatted prompt (context+observations+code) length (train):
- mean ≈ **222** words (≈ **1,739** chars); 95th pct ≈ **311** words
- Reference code length (train):
- mean ≈ **34** lines (min **9**, max **71**)
### Training Procedure (from notebook)
- **Approach:** QLoRA (4-bit) SFT using `trl.SFTTrainer`
- **Quantization:** `bitsandbytes` 4-bit `nf4`, compute dtype `bfloat16`
- **LoRA config:** `r=16`, `alpha=32`, `dropout=0.05`, `bias="none"`, targets = `q_proj,k_proj,v_proj,o_proj,gate_proj,up_proj,down_proj`
- **Tokenizer:** right padding; `eos_token` as `pad_token`
- **Hyperparameters (TrainingArguments):**
- epochs: **10**
- per-device train batch size: **1**
- gradient_accumulation_steps: **4**
- optimizer: **paged_adamw_32bit**
- learning rate: **2e-4**, weight decay: **1e-3**
- warmup_ratio: **0.03**, scheduler: **constant**
- bf16: **True** (fp16: False), group_by_length: True
- logging_steps: 10, save/eval every 50 steps
- report_to: tensorboard
- **Saving:** `trainer.save_model("./phi3-lab-report-coder-final")` (adapter folder)
### Speeds, Sizes, Times
- **Hardware:** Google Colab **T4 GPU** (per notebook metadata)
- **Adapter artifact:** LoRA weights only (load with the base model).
- **Wall-clock time:** not logged in the notebook.
---
## Evaluation
### Testing Data, Factors & Metrics
- **Eval set:** `eval.jsonl` (**6** items) with same schema.
- **Primary metric (planned):** ROUGE-L / ROUGE-1 against reference `code` (proxy for surface similarity).
- **Recommended additional checks:** unit tests on numeric outputs; pyflakes/ruff for syntax; run-time assertions.
### Results
- No automated score recorded in the notebook.
- **Suggested protocol** (a sketch follows the list):
1) Generate code for each eval item using the same prompt template.
2) Execute safely in a sandbox with provided observations.
3) Compare computed scalars (e.g., average \(g\), \(R\), Reynolds number) to ground truth tolerances.
4) Report pass rate and ROUGE for readability/similarity.
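A minimal sketch of steps 2-3, with hypothetical helper names and a 2% tolerance chosen purely for illustration:
```python
# Sketch of the suggested eval loop (helpers and tolerances are illustrative,
# not the repo's actual harness).
import math
import subprocess
import sys
import tempfile

def run_generated_code(code: str, timeout: int = 10) -> str:
    """Write generated code to a temp file and run it in a subprocess sandbox."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True,
                            text=True, timeout=timeout)
    return result.stdout

def within_tolerance(computed: float, expected: float, rel_tol: float = 0.02) -> bool:
    """Pass if the computed scalar is within 2% of ground truth."""
    return math.isclose(computed, expected, rel_tol=rel_tol)
```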
---
## Model Examination (optional)
- Inspect token-by-token attention to `OBSERVATIONS` keys (ablation: shuffle keys to test robustness).
- Add **unit-check helpers** (e.g., `pint`) in prompts to encourage explicit conversions; a minimal `pint` illustration follows.
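A minimal `pint` sketch of such a unit-checked calculation (the pendulum reading values are illustrative):
```python
# Minimal pint illustration: computing g from one pendulum reading with explicit units.
import math
import pint

ureg = pint.UnitRegistry()
L = 0.50 * ureg.meter
T = 1.42 * ureg.second
g = (4 * math.pi**2 * L) / T**2
print(g.to(ureg.meter / ureg.second**2))  # ~9.79 meter / second ** 2
```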
---
## Environmental Impact
- **Hardware Type:** NVIDIA T4 (Colab)
- **Precision:** 4-bit QLoRA with `bfloat16` compute
- **Hours used:** Not recorded (dataset is small; expected low)
- **Cloud Provider/Region:** Colab (unspecified)
- **Carbon Emitted:** Not estimated (see [ML CO2 Impact calculator](https://mlco2.github.io/impact#compute))
---
## Technical Specifications
### Architecture & Objective
- **Backbone:** `Phi-3-mini-4k-instruct` (decoder-only causal LM)
- **Objective:** Supervised fine-tuning to continue from `### CODE:` with correct, executable Python.
### Compute Infrastructure
- **Hardware:** Colab GPU (T4) + CPU RAM
- **Software:**
- `transformers`, `trl`, `peft`, `bitsandbytes`, `datasets`, `accelerate`, `torch`
---
## Citation
```bibtex
@article{abdin2024phi3,
  title   = {Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone},
  author  = {Abdin, Marah and others},
  journal = {arXiv preprint arXiv:2404.14219},
  year    = {2024},
  doi     = {10.48550/arXiv.2404.14219},
  url     = {https://arxiv.org/abs/2404.14219}
}
```
---
## Glossary
- **QLoRA:** Fine-tuning with low-rank adapters on a quantized base model (saves memory/compute).
- **LoRA (r, α):** Rank and scaling of low-rank update matrices.
---
## More Information
- For better robustness, consider augmenting data with **unit-perturbation** and **noise-in-readings** variants, and add examples across more domains (materials, thermo, optics).
- Add **eval harness** with numeric tolerances and syntax checks.
---
## Model Card Authors
- Barghav777
---
|
raghu96/merged_paligemma2-10b-mix-224-2025-09-17-07-49-54
|
raghu96
| 2025-09-17T07:52:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"paligemma",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-17T07:49:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
phucnguyen4499/LLM_implement2.3_alpha64
|
phucnguyen4499
| 2025-09-17T07:52:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T07:52:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Reihaneh/wav2vec2_fy_nl_50_epochs_6
|
Reihaneh
| 2025-09-17T07:52:12Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T07:52:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tandeshao/Qwen3-case-instruct
|
tandeshao
| 2025-09-17T07:50:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-4B-Instruct-2507",
"base_model:finetune:unsloth/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T07:50:05Z |
---
base_model: unsloth/Qwen3-4B-Instruct-2507
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** tandeshao
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B-Instruct-2507
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
luckeciano/Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-6-v3_5357
|
luckeciano
| 2025-09-17T07:48:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T03:20:04Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-6-v3_5357
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-6-v3_5357
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-6-v3_5357", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/mn7inaca)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758095100
|
devivodowdlel
| 2025-09-17T07:46:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged exotic iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T07:45:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
starsfriday/Wan2.2-I2V-KungFu
|
starsfriday
| 2025-09-17T07:46:17Z | 12 | 1 |
diffusers
|
[
"diffusers",
"lora",
"template:diffusion-lora",
"image-to-video",
"en",
"base_model:Wan-AI/Wan2.2-I2V-A14B",
"base_model:adapter:Wan-AI/Wan2.2-I2V-A14B",
"license:apache-2.0",
"region:us"
] |
image-to-video
| 2025-09-16T03:39:21Z |
---
license: apache-2.0
language:
- en
base_model:
- Wan-AI/Wan2.2-I2V-A14B
pipeline_tag: image-to-video
tags:
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
一个小孩双脚直立,双臂灵活张开,时而抬手,然后转身朝向左边,时而踢腿,做着一系列打拳动作,wugong
output:
url: result/output1.mp4
- text: >-
一个小动物双脚直立,双臂灵活张开,时而抬手,然后转身朝向左边,时而踢腿,做着一系列打拳动作,wugong
output:
url: result/output2.mp4
- text: >-
一个小动物双脚直立,双臂灵活张开,时而抬手,然后转身朝向左边,时而踢腿,做着一系列打拳动作,wugong
output:
url: result/output3.mp4
- text: >-
一个男人双脚直立,双臂灵活张开,时而抬手,然后转身朝向左边,时而踢腿,做着一系列打拳动作,wugong
output:
url: result/output4.mp4
- text: >-
一个女人双脚直立,双臂灵活张开,时而抬手,然后转身朝向左边,时而踢腿,做着一系列打拳动作,wugong
output:
url: result/output5.mp4
- text: >-
一个小孩双脚直立,双臂灵活张开,时而抬手,然后转身朝向左边,时而踢腿,做着一系列打拳动作,wugong
output:
url: result/output6.mp4
---
<div style="background-color: #f8f9fa; padding: 20px; border-radius: 10px; margin-bottom: 20px;">
<h1 style="color: #24292e; margin-top: 0;">starsfriday LoRA for Wan2.2-I2V-A14B</h1>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Overview</h2>
<p>This LoRA is trained on the Wan2.2-I2V-A14B model.</p>
</div>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Features</h2>
<ul style="margin-bottom: 0;">
<li>Generate kung-fu (wugong) action videos from a reference image and prompt</li>
<li>Trained on the Wan2.2-I2V-A14B base model</li>
<li>Consistent results across different object types</li>
<li>Simple prompt structure that's easy to adapt</li>
</ul>
</div>
<Gallery />
</div>
# Model File and Inference Workflow
## 📥 Download Links:
- [wan2.2_wugong_i2v_high](./wan2.2_wugong_i2v_high.safetensors) - LoRA Model File
- [wan2.2_wugong_i2v_low](./wan2.2_wugong_i2v_low.safetensors) - LoRA Model File
---
<div style="background-color: #f8f9fa; padding: 20px; border-radius: 10px; margin-bottom: 20px;">
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Recommended Settings</h2>
<ul style="margin-bottom: 0;">
<li><b>LoRA Strength:</b> 1.0</li>
<li><b>Embedded Guidance Scale:</b> 1.0</li>
<li><b>Flow Shift:</b> 8.0</li>
</ul>
</div>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Trigger Words</h2>
<p>The key trigger phrase is: <code style="background-color: #f0f0f0; padding: 3px 6px; border-radius: 4px;">wugong</code></p>
</div>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h2 style="color: #24292e; margin-top: 0;">Prompt Template</h2>
<p>For best results, use this prompt structure:</p>
<div style="background-color: #f0f0f0; padding: 12px; border-radius: 6px; margin: 10px 0;">
<i>一个小孩双脚直立,双臂灵活张开,时而抬手,然后转身朝向左边,时而踢腿,做着一系列打拳动作,wugong</i><br/>
<small>(English: A child stands upright with arms spread nimbly, sometimes raising a hand, then turning to face the left, sometimes kicking, performing a series of punching moves, wugong.)</small>
</div>
</div>
</div>
|
b00l26/Taxi-v3
|
b00l26
| 2025-09-17T07:44:25Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-17T07:44:19Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="b00l26/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
KFC-F200/END_FM_DARE_0.7
|
KFC-F200
| 2025-09-17T07:42:34Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:aabberick/Llama3.2-3b-H200-End_16bit",
"base_model:merge:aabberick/Llama3.2-3b-H200-End_16bit",
"base_model:aabberick/Llama3.2-3b-H200FamilyM",
"base_model:merge:aabberick/Llama3.2-3b-H200FamilyM",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:merge:unsloth/Llama-3.2-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-16T02:26:03Z |
---
base_model:
- aabberick/Llama3.2-3b-H200-End_16bit
- aabberick/Llama3.2-3b-H200FamilyM
- unsloth/Llama-3.2-3B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# dare_0.7
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear DARE](https://arxiv.org/abs/2311.03099) merge method using [unsloth/Llama-3.2-3B-Instruct](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct) as a base.
### Models Merged
The following models were included in the merge:
* [aabberick/Llama3.2-3b-H200-End_16bit](https://huggingface.co/aabberick/Llama3.2-3b-H200-End_16bit)
* [aabberick/Llama3.2-3b-H200FamilyM](https://huggingface.co/aabberick/Llama3.2-3b-H200FamilyM)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: unsloth/Llama-3.2-3B-Instruct
# No parameters necessary for base model
- model: aabberick/Llama3.2-3b-H200-End_16bit
parameters:
density: 0.7
weight: 0.5
- model: aabberick/Llama3.2-3b-H200FamilyM
parameters:
density: 0.7
weight: 0.5
merge_method: dare_linear
base_model: unsloth/Llama-3.2-3B-Instruct
parameters:
int8_mask: false
dtype: bfloat16
```
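To reproduce the merge, mergekit can consume the configuration above. Below is a minimal sketch using mergekit's documented Python entry point; the YAML file name and output path are assumptions, not part of this repo:
```python
# Sketch: running the merge above via mergekit's Python API
# (assumes the YAML is saved as dare_0.7.yaml; output path is illustrative).
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("dare_0.7.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./dare_0.7-merged",
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```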
|
quansuv/qwen2-7b-instruct-trl-sft-ChartQA
|
quansuv
| 2025-09-17T07:39:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-13T06:24:09Z |
---
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
model_name: qwen2-7b-instruct-trl-sft-ChartQA
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for qwen2-7b-instruct-trl-sft-ChartQA
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="quansuv/qwen2-7b-instruct-trl-sft-ChartQA", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.24.0.dev0
- Transformers: 4.56.1
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
gouki510/lamma3-8b-dolphin_numbers
|
gouki510
| 2025-09-17T07:38:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T07:38:35Z |
---
base_model: unsloth/Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** gouki510
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sssssungjae/Qwen2.5-7B-instruct-finance-model_stock_v3
|
sssssungjae
| 2025-09-17T07:37:47Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"merge",
"mergekit",
"lazymergekit",
"region:us"
] | null | 2025-09-17T07:34:49Z |
---
tags:
- merge
- mergekit
- lazymergekit
---
# Qwen2.5-7B-instruct-finance-model_stock_v3
Qwen2.5-7B-instruct-finance-model_stock_v3 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [fblgit/cybertron-v4-qw7B-MGS](https://huggingface.co/fblgit/cybertron-v4-qw7B-MGS)
* [sssssungjae/qwen2_5-7b-instruct-finance-full-final-15_15](https://huggingface.co/sssssungjae/qwen2_5-7b-instruct-finance-full-final-15_15)
* [sethuiyer/Qwen2.5-7B-Anvita](https://huggingface.co/sethuiyer/Qwen2.5-7B-Anvita)
## 🧩 Configuration
```yaml
merge_method: model_stock
base_model: Qwen/Qwen2.5-7B
models:
  - model: fblgit/cybertron-v4-qw7B-MGS
  - model: sssssungjae/qwen2_5-7b-instruct-finance-full-final-15_15
  - model: sethuiyer/Qwen2.5-7B-Anvita
parameters:
  filter_wise: false
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "sssssungjae/Qwen2.5-7B-instruct-finance-model_stock_v3"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
lengocquangLAB/phobert-large-job-title-match
|
lengocquangLAB
| 2025-09-17T07:37:28Z | 0 | 0 | null |
[
"safetensors",
"roberta",
"region:us"
] | null | 2025-09-17T04:02:45Z |
---
{}
---
# PhoBERT Large Fine-tuned for Job Title Matching
This model is **PhoBERT-Large** fine-tuned with **LoRA** for sequence classification.
It predicts whether **two job titles refer to the same position**.
## Training Details
- **Base model:** vinai/phobert-large
- **Task:** Sequence Classification (2 labels: match / no match)
- **LoRA config:** r=16, lora_alpha=32, target_modules=["query", "key", "value", "output.dense"], lora_dropout=0.05
- **Optimizer:** AdamW, learning rate 2e-4
- **Batch size:** 64 (with gradient accumulation 4)
- **Epochs:** 8
- **Weight decay:** 0.01
- **Mixed precision:** fp16
- **Metrics:** F1-score
- **Framework:** HuggingFace Transformers + PEFT
- **Logging & tracking:** WandB (`report_to=["wandb"]`)
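The setup above corresponds roughly to the following PEFT configuration (a hypothetical reconstruction; the exact training script is not published here):

```python
# Sketch of the LoRA setup described above, using PEFT.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained("vinai/phobert-large", num_labels=2)
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["query", "key", "value", "output.dense"],
    lora_dropout=0.05,
    task_type="SEQ_CLS",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the adapter weights are trainable
```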
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# Load tokenizer and model from Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("lengocquangLAB/phobert-large-job-title-match")
model = AutoModelForSequenceClassification.from_pretrained("lengocquangLAB/phobert-large-job-title-match")
# Example input: Job title vs Job title
inputs = tokenizer("AI Engineer", "Machine Learning Engineer", return_tensors="pt")
outputs = model(**inputs)
# Prediction: 0 = no match, 1 = match
pred = outputs.logits.argmax(dim=-1)
print(pred)
```
|
twelvehertz/open-o3-sft-2
|
twelvehertz
| 2025-09-17T07:37:03Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/Qwen2.5-14B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"arxiv:1910.09700",
"base_model:unsloth/Qwen2.5-14B-Instruct",
"region:us"
] | null | 2025-09-17T07:36:58Z |
---
base_model: unsloth/Qwen2.5-14B-Instruct
library_name: peft
tags:
- base_model:adapter:unsloth/Qwen2.5-14B-Instruct
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
Beinsezii/GLM-4.5-Air-Q4F-Q8A-Q8SH-GGUF
|
Beinsezii
| 2025-09-17T07:36:32Z | 0 | 0 | null |
[
"gguf",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-17T06:36:27Z |
---
license: mit
---
- Q4F : Q4_K feed-forward (Q5_1 for ffn_down due to shape constraints)
- Q8A : Q8_0 attention, Q8_0 output, Q8_0 embeds
- Q8SH : Q8_0 shared experts
Runs at readable speeds on a 24 GiB GPU plus 64 GB of system RAM.
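A minimal loading sketch with llama-cpp-python and partial GPU offload (the local filename and layer count below are illustrative assumptions; tune `n_gpu_layers` to your VRAM):

```python
# Sketch: load the GGUF with llama-cpp-python and offload part of the model to GPU.
from llama_cpp import Llama

llm = Llama(
    model_path="GLM-4.5-Air-Q4F-Q8A-Q8SH.gguf",  # hypothetical local filename
    n_gpu_layers=30,  # illustrative; raise until the 24 GiB GPU is full
    n_ctx=4096,
)
print(llm("Hello", max_tokens=32)["choices"][0]["text"])
```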
|
woutut/qwen-ph-7b-coder-instruct-v2_16bit
|
woutut
| 2025-09-17T07:36:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T07:35:53Z |
---
base_model: unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** woutut
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
luerhard/PopBERT
|
luerhard
| 2025-09-17T07:36:00Z | 1,107 | 6 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-11T11:49:42Z |
---
license: mit
language:
- de
pipeline_tag: text-classification
metrics:
- f1
library_name: transformers
---
# PopBERT
PopBERT is a model for German-language populism detection in political speeches within the German Bundestag, based on the deepset/gbert-large model: https://huggingface.co/deepset/gbert-large
It is a multilabel model trained on a manually curated dataset of sentences from the 18th and 19th legislative periods.
In addition to capturing the foundational dimensions of populism, namely "anti-elitism" and "people-centrism," the model was also fine-tuned to identify the underlying ideological orientation as either "left-wing" or "right-wing."
Helpful code and analyses are stored in a GitHub repo: [github.com/luerhard/PopBERT](https://github.com/luerhard/PopBERT)
# Prediction
The model outputs a tensor of length 4.
The table below maps each index of the predicted probabilities to its dimension.
| **Index** | **Dimension** |
|-----------|--------------------------|
| 0 | Anti-Elitism |
| 1 | People-Centrism |
| 2 | Left-Wing Host-Ideology |
| 3 | Right-Wing Host-Ideology |
# Usage Example
```python
import torch
from transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer
# load tokenizer
tokenizer = AutoTokenizer.from_pretrained("luerhard/PopBERT")
# load model
model = AutoModelForSequenceClassification.from_pretrained("luerhard/PopBERT")
# define text to be predicted
text = (
"Das ist Klassenkampf von oben, das ist Klassenkampf im Interesse von "
"Vermögenden und Besitzenden gegen die Mehrheit der Steuerzahlerinnen und "
"Steuerzahler auf dieser Erde."
)
# encode text with tokenizer
encodings = tokenizer(text, return_tensors="pt")
# predict
with torch.inference_mode():
out = model(**encodings)
# get probabilities
probs = torch.nn.functional.sigmoid(out.logits)
print(probs.detach().numpy())
```
```
[[0.8765146 0.34838045 0.983123 0.02148379]]
```
# Performance
To maximize performance, it is recommended to use the following thresholds per dimension:
```
[0.415961, 0.295400, 0.429109, 0.302714]
```
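Continuing the usage example above, the thresholds can be applied element-wise to turn probabilities into labels (a minimal sketch):

```python
# Apply the recommended per-dimension thresholds to `probs` from the example above.
import numpy as np

thresholds = np.array([0.415961, 0.295400, 0.429109, 0.302714])
labels = (probs.detach().numpy() >= thresholds).astype(int)
print(labels)  # [[1 1 1 0]] for the example sentence
```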
Using these thresholds, the model achieves the following performance on the test set:
| Dimension | Precision | Recall | F1 |
|---------------------|---------------|---------------|---------------|
| Anti-Elitism | 0.81 | 0.88 | 0.84 |
| People-Centrism | 0.70 | 0.73 | 0.71 |
| Left-Wing Ideology | 0.69 | 0.77 | 0.73 |
| Right-Wing Ideology | 0.68 | 0.66 | 0.67 |
| --- | --- | --- | --- |
| micro avg | 0.75 | 0.80 | 0.77 |
| macro avg | 0.72 | 0.76 | 0.74 |
|
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758094482
|
devivodowdlel
| 2025-09-17T07:35:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged exotic iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T07:35:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pavan01729/llama-8B-medical-alpaca
|
pavan01729
| 2025-09-17T07:35:47Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"dataset:tatsu-lab/alpaca",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-17T07:35:37Z |
---
library_name: peft
tags:
- generated_from_trainer
datasets:
- tatsu-lab/alpaca
base_model: meta-llama/Meta-Llama-3.1-8B
model-index:
- name: root/outputs/fine_tuned_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.6.0`
```yaml
base_model: meta-llama/Meta-Llama-3.1-8B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
  - path: tatsu-lab/alpaca
    type: alpaca
    format: csv
    prompt_template: '### Instruction: {instruction}
      ### Input: {input}
      ### Response: {output}'
dataset_prepared_path: null
val_set_size: 0.1
output_dir: /root/outputs/fine_tuned_model
adapter: qlora
lora_model_dir: null
sequence_len: 2048
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
lora_r: 16
lora_alpha: 8
lora_dropout: 0.05
lora_target_modules: null
lora_target_linear: true
lora_fan_in_fan_out: null
wandb_project: null
wandb_entity: null
wandb_watch: null
wandb_name: null
wandb_log_model: null
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 10
max_steps: 10000000
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16: null
tf32: false
gradient_checkpointing: true
early_stopping_patience: 3
save_strategy: steps
save_steps: 20
evaluation_strategy: steps
eval_steps: 20
load_best_model_at_end: true
save_total_limit: 3
metric_for_best_model: loss
greater_is_better: false
resume_from_checkpoint: null
local_rank: null
logging_steps: 1
xformers_attention: null
flash_attention: true
warmup_steps: 10
debug: null
deepspeed: null
weight_decay: 0.0
fsdp: null
fsdp_config: null
special_tokens:
  pad_token: <|end_of_text|>
mlflow_tracking_uri: https://mlflow-dev.qpiai-pro.tech
mlflow_experiment_name: llama-8B-medical-alpaca
hf_mlflow_log_artifacts: 'true'
local_files_only: true
```
</details><br>
# root/outputs/fine_tuned_model
This model was trained from scratch on the tatsu-lab/alpaca dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.PAGED_ADAMW with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 5950
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.5158 | 0.0017 | 1 | 2.6621 |
| 2.326 | 0.0335 | 20 | 2.2586 |
| 2.0957 | 0.0671 | 40 | 2.2004 |
| 2.0924 | 0.1006 | 60 | 2.1796 |
| 2.1954 | 0.1341 | 80 | 2.1625 |
| 2.1584 | 0.1676 | 100 | 2.1508 |
| 2.2213 | 0.2012 | 120 | 2.1299 |
| 2.0102 | 0.2347 | 140 | 2.1306 |
| 2.1419 | 0.2682 | 160 | 2.1169 |
| 1.8357 | 0.3018 | 180 | 2.1133 |
| 2.0238 | 0.3353 | 200 | 2.1090 |
| 2.0338 | 0.3688 | 220 | 2.1089 |
| 2.0982 | 0.4023 | 240 | 2.0969 |
| 2.0284 | 0.4359 | 260 | 2.0978 |
| 2.0016 | 0.4694 | 280 | 2.0961 |
| 2.0652 | 0.5029 | 300 | 2.0866 |
| 2.0064 | 0.5365 | 320 | 2.0939 |
| 2.1175 | 0.5700 | 340 | 2.0795 |
| 1.943 | 0.6035 | 360 | 2.0803 |
| 2.0691 | 0.6370 | 380 | 2.0861 |
| 1.8928 | 0.6706 | 400 | 2.0775 |
| 2.0693 | 0.7041 | 420 | 2.0903 |
| 2.2198 | 0.7376 | 440 | 2.0779 |
| 1.7801 | 0.7712 | 460 | 2.0808 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.0
- Pytorch 2.3.1+cu121
- Datasets 3.1.0
- Tokenizers 0.21.0
|
starriver030515/Qwen2.5-Math-1.5B-16k
|
starriver030515
| 2025-09-17T07:34:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T07:10:05Z |
---
license: mit
library_name: transformers
pipeline_tag: text-generation
---
The base Qwen2.5-Math-1.5B model used by HAPO.
We changed rope_theta from 10000 to 40000 and extended the context window to 16k.
We also modified the chat_template to include the system prompt and add `<think>`.
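The change can be verified from the published config (a quick sketch; the printed values are the ones expected from the description above):

```python
# Sanity-check the modified RoPE base and extended context window.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("starriver030515/Qwen2.5-Math-1.5B-16k")
print(cfg.rope_theta)               # expected: 40000
print(cfg.max_position_embeddings)  # expected: ~16k
```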
# Citation
If you find our model, data, or evaluation code useful, please kindly cite our paper:
```bib
```
|
phucnguyen4499/LLM_implement2.3_rank16
|
phucnguyen4499
| 2025-09-17T07:32:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T07:32:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JobixAi/tts-pipeline-20250917_071823
|
JobixAi
| 2025-09-17T07:31:32Z | 0 | 0 | null |
[
"safetensors",
"llama",
"region:us"
] | null | 2025-09-17T07:30:41Z |
This model finetunes the pretrained model `canopylabs/orpheus-3b-0.1-pretrained` using the finetuning pipeline. Full finetuning with Unsloth for 1 epoch.
**Finetune ID**: `9aacfc20-4f37-4f9b-90c9-00fcb75a7113`
### Datasets
`JobixAi/mindy-higgs-metadata_1-20250917-061820`
`JobixAi/bob-higgs-metadata_2-20250917-065802`
`JobixAi/erin-higgs-metadata_3-20250917-071918`
### Inference
```text
temperature = 0.7
top_p = 0.9
repetition_penalty = 1.1
```
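For illustration, these parameters map onto `generate` as follows (a sketch only; full Orpheus TTS inference additionally requires the audio-token decoding pipeline, which is not shown here):

```python
# Sketch: how the listed sampling parameters plug into `generate`.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("JobixAi/tts-pipeline-20250917_071823")
model = AutoModelForCausalLM.from_pretrained("JobixAi/tts-pipeline-20250917_071823")

ids = tok("Hello there!", return_tensors="pt").input_ids
out = model.generate(
    ids,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1,
    max_new_tokens=256,
)
```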
|
Shaimaz/Gemma-3-Emotion-Sensitive-Tutor-v1
|
Shaimaz
| 2025-09-17T07:31:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T07:30:49Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Shaimaz
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
syso7722/Qwen3-1.7B-Base-MED-Instruct
|
syso7722
| 2025-09-17T07:30:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T07:29:02Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NotoriousH2/qwen3-1.7b-base-MED-Instruct
|
NotoriousH2
| 2025-09-17T07:29:48Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T07:00:53Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
doyou2/Qwen3-1.7B-Base-MED-Instruct
|
doyou2
| 2025-09-17T07:29:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T07:28:48Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
b00l26/q-FrozenLake-v1-4x4-noSlippery
|
b00l26
| 2025-09-17T07:28:13Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-17T07:28:06Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="b00l26/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
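A greedy evaluation rollout might then look like this (a sketch assuming the pickle exposes a `qtable` array, as in the Deep RL course format):

```python
# Greedy rollout sketch, reusing `model` and `env` from the block above.
import numpy as np

state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # pick the best-known action
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
print("episode reward:", reward)
```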
|
souvikg544/HindiOCR-VLM
|
souvikg544
| 2025-09-17T07:28:07Z | 0 | 0 | null |
[
"en",
"base_model:stepfun-ai/GOT-OCR2_0",
"base_model:finetune:stepfun-ai/GOT-OCR2_0",
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T07:24:55Z |
---
license: apache-2.0
language:
- en
base_model:
- stepfun-ai/GOT-OCR2_0
---
|
Punn1403/detr_finetuned_bccd
|
Punn1403
| 2025-09-17T07:27:14Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"conditional_detr",
"object-detection",
"generated_from_trainer",
"dataset:generator",
"base_model:microsoft/conditional-detr-resnet-50",
"base_model:finetune:microsoft/conditional-detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2025-09-17T06:31:50Z |
---
library_name: transformers
license: apache-2.0
base_model: microsoft/conditional-detr-resnet-50
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: detr_finetuned_bccd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_finetuned_bccd
This model is a fine-tuned version of [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5863
- Map: 0.5535
- Map 50: 0.823
- Map 75: 0.6013
- Map Small: -1.0
- Map Medium: 0.3472
- Map Large: 0.638
- Mar 1: 0.4031
- Mar 10: 0.6432
- Mar 100: 0.7115
- Mar Small: -1.0
- Mar Medium: 0.542
- Mar Large: 0.73
- Map Platelets: 0.3468
- Mar 100 Platelets: 0.5444
- Map Rbc: 0.5782
- Mar 100 Rbc: 0.75
- Map Wbc: 0.7356
- Mar 100 Wbc: 0.84
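A minimal inference sketch using the standard `transformers` object-detection pipeline (the image path and threshold below are illustrative):

```python
# Sketch: run the fine-tuned detector on a blood-smear image.
from transformers import pipeline

detector = pipeline("object-detection", model="Punn1403/detr_finetuned_bccd")
for det in detector("blood_smear.jpg", threshold=0.5):  # hypothetical image path
    print(det["label"], round(det["score"], 3), det["box"])
```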
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Platelets | Mar 100 Platelets | Map Rbc | Mar 100 Rbc | Map Wbc | Mar 100 Wbc |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:-------------:|:-----------------:|:-------:|:-----------:|:-------:|:-----------:|
| No log | 1.0 | 26 | 0.9642 | 0.1078 | 0.177 | 0.1243 | -1.0 | 0.0 | 0.113 | 0.0193 | 0.1197 | 0.2165 | -1.0 | 0.0 | 0.2165 | 0.0 | 0.0 | 0.3235 | 0.6496 | 0.0 | 0.0 |
| No log | 2.0 | 52 | 0.9589 | 0.1277 | 0.2441 | 0.1207 | -1.0 | 0.0405 | 0.1148 | 0.0413 | 0.1585 | 0.2468 | -1.0 | 0.1101 | 0.2116 | 0.0391 | 0.1056 | 0.3441 | 0.6349 | 0.0 | 0.0 |
| No log | 3.0 | 78 | 0.8392 | 0.2048 | 0.363 | 0.2117 | -1.0 | 0.111 | 0.2378 | 0.1376 | 0.459 | 0.6004 | -1.0 | 0.4609 | 0.5277 | 0.1078 | 0.4514 | 0.405 | 0.6685 | 0.1018 | 0.6812 |
| No log | 4.0 | 104 | 0.7755 | 0.3901 | 0.6199 | 0.436 | -1.0 | 0.149 | 0.4846 | 0.3071 | 0.5539 | 0.6514 | -1.0 | 0.4754 | 0.6805 | 0.1471 | 0.4792 | 0.4421 | 0.6862 | 0.581 | 0.7887 |
| No log | 5.0 | 130 | 0.7384 | 0.4471 | 0.7188 | 0.488 | -1.0 | 0.1917 | 0.5099 | 0.3475 | 0.5664 | 0.6504 | -1.0 | 0.4507 | 0.7194 | 0.1887 | 0.4597 | 0.4747 | 0.694 | 0.678 | 0.7975 |
| No log | 6.0 | 156 | 0.7484 | 0.4525 | 0.7284 | 0.4858 | -1.0 | 0.2084 | 0.5322 | 0.3549 | 0.5697 | 0.6437 | -1.0 | 0.4449 | 0.7039 | 0.2053 | 0.4528 | 0.4713 | 0.6784 | 0.681 | 0.8 |
| No log | 7.0 | 182 | 0.7283 | 0.4352 | 0.7337 | 0.4859 | -1.0 | 0.1647 | 0.4529 | 0.3455 | 0.5486 | 0.6284 | -1.0 | 0.4 | 0.6178 | 0.1618 | 0.3986 | 0.4761 | 0.688 | 0.6677 | 0.7987 |
| No log | 8.0 | 208 | 0.7011 | 0.4873 | 0.7735 | 0.527 | -1.0 | 0.2589 | 0.5811 | 0.3658 | 0.6008 | 0.6773 | -1.0 | 0.5058 | 0.7181 | 0.2553 | 0.5111 | 0.5051 | 0.7121 | 0.7015 | 0.8087 |
| No log | 9.0 | 234 | 0.6765 | 0.482 | 0.757 | 0.5214 | -1.0 | 0.257 | 0.5759 | 0.3619 | 0.5998 | 0.6877 | -1.0 | 0.5507 | 0.7247 | 0.2518 | 0.5556 | 0.5037 | 0.7212 | 0.6903 | 0.7862 |
| No log | 10.0 | 260 | 0.6977 | 0.4837 | 0.7833 | 0.5141 | -1.0 | 0.2669 | 0.5117 | 0.373 | 0.5761 | 0.654 | -1.0 | 0.4377 | 0.6952 | 0.2618 | 0.4431 | 0.4919 | 0.7077 | 0.6973 | 0.8112 |
| No log | 11.0 | 286 | 0.6463 | 0.5015 | 0.7766 | 0.552 | -1.0 | 0.2568 | 0.5259 | 0.3789 | 0.5999 | 0.6817 | -1.0 | 0.5043 | 0.6271 | 0.2527 | 0.4972 | 0.54 | 0.7416 | 0.7118 | 0.8062 |
| No log | 12.0 | 312 | 0.6382 | 0.5109 | 0.7939 | 0.5519 | -1.0 | 0.275 | 0.565 | 0.3855 | 0.612 | 0.6902 | -1.0 | 0.5101 | 0.7083 | 0.2719 | 0.5125 | 0.5403 | 0.7307 | 0.7206 | 0.8275 |
| No log | 13.0 | 338 | 0.6360 | 0.504 | 0.7943 | 0.5412 | -1.0 | 0.2616 | 0.562 | 0.3762 | 0.6118 | 0.6923 | -1.0 | 0.5188 | 0.7076 | 0.2602 | 0.5208 | 0.5406 | 0.7311 | 0.7111 | 0.825 |
| No log | 14.0 | 364 | 0.6422 | 0.5205 | 0.8 | 0.5632 | -1.0 | 0.305 | 0.6082 | 0.3915 | 0.6112 | 0.6849 | -1.0 | 0.5058 | 0.715 | 0.3038 | 0.5097 | 0.5381 | 0.7212 | 0.7197 | 0.8238 |
| No log | 15.0 | 390 | 0.7001 | 0.4877 | 0.7964 | 0.5097 | -1.0 | 0.3002 | 0.546 | 0.3682 | 0.5841 | 0.6589 | -1.0 | 0.513 | 0.6866 | 0.2976 | 0.5167 | 0.4969 | 0.6761 | 0.6685 | 0.7837 |
| No log | 16.0 | 416 | 0.6330 | 0.5173 | 0.7955 | 0.5706 | -1.0 | 0.3087 | 0.572 | 0.3811 | 0.6138 | 0.6855 | -1.0 | 0.5304 | 0.6651 | 0.3025 | 0.5278 | 0.5454 | 0.7236 | 0.7038 | 0.805 |
| No log | 17.0 | 442 | 0.6013 | 0.5356 | 0.8156 | 0.5856 | -1.0 | 0.3084 | 0.6229 | 0.3936 | 0.6348 | 0.7128 | -1.0 | 0.5565 | 0.7586 | 0.3051 | 0.5625 | 0.5594 | 0.7396 | 0.7423 | 0.8363 |
| No log | 18.0 | 468 | 0.6173 | 0.5414 | 0.8115 | 0.5947 | -1.0 | 0.3382 | 0.6081 | 0.4042 | 0.6367 | 0.7073 | -1.0 | 0.5435 | 0.736 | 0.3342 | 0.5472 | 0.5528 | 0.7334 | 0.7371 | 0.8413 |
| No log | 19.0 | 494 | 0.5997 | 0.5368 | 0.8061 | 0.5793 | -1.0 | 0.3236 | 0.641 | 0.3981 | 0.6291 | 0.7038 | -1.0 | 0.5319 | 0.7575 | 0.3217 | 0.5389 | 0.564 | 0.7474 | 0.7246 | 0.825 |
| 0.7657 | 20.0 | 520 | 0.5929 | 0.5384 | 0.8159 | 0.5793 | -1.0 | 0.3278 | 0.6412 | 0.3973 | 0.6361 | 0.7103 | -1.0 | 0.5449 | 0.7386 | 0.3258 | 0.5486 | 0.5732 | 0.7524 | 0.716 | 0.83 |
| 0.7657 | 21.0 | 546 | 0.5932 | 0.5403 | 0.8167 | 0.5962 | -1.0 | 0.3445 | 0.6262 | 0.401 | 0.6382 | 0.7129 | -1.0 | 0.5623 | 0.7249 | 0.3424 | 0.5639 | 0.571 | 0.7485 | 0.7074 | 0.8263 |
| 0.7657 | 22.0 | 572 | 0.5931 | 0.5422 | 0.82 | 0.5838 | -1.0 | 0.3427 | 0.6289 | 0.3984 | 0.6363 | 0.7068 | -1.0 | 0.5362 | 0.7272 | 0.3403 | 0.5389 | 0.5722 | 0.7478 | 0.7141 | 0.8338 |
| 0.7657 | 23.0 | 598 | 0.5896 | 0.5454 | 0.8141 | 0.6003 | -1.0 | 0.3437 | 0.6427 | 0.4001 | 0.6333 | 0.7036 | -1.0 | 0.5304 | 0.7365 | 0.3421 | 0.5347 | 0.5739 | 0.7486 | 0.7202 | 0.8275 |
| 0.7657 | 24.0 | 624 | 0.5918 | 0.5473 | 0.8161 | 0.5912 | -1.0 | 0.3487 | 0.61 | 0.4027 | 0.6445 | 0.7132 | -1.0 | 0.5551 | 0.7275 | 0.3462 | 0.5569 | 0.5733 | 0.7464 | 0.7225 | 0.8363 |
| 0.7657 | 25.0 | 650 | 0.5874 | 0.5496 | 0.8117 | 0.6142 | -1.0 | 0.3503 | 0.6333 | 0.4026 | 0.6448 | 0.7145 | -1.0 | 0.558 | 0.7279 | 0.3494 | 0.5597 | 0.5751 | 0.7487 | 0.7244 | 0.835 |
| 0.7657 | 26.0 | 676 | 0.5887 | 0.5453 | 0.8162 | 0.6013 | -1.0 | 0.3375 | 0.6329 | 0.4004 | 0.6365 | 0.7085 | -1.0 | 0.5391 | 0.7279 | 0.3376 | 0.5417 | 0.5754 | 0.7476 | 0.723 | 0.8363 |
| 0.7657 | 27.0 | 702 | 0.5881 | 0.5518 | 0.8205 | 0.6036 | -1.0 | 0.3489 | 0.6358 | 0.4014 | 0.6438 | 0.7113 | -1.0 | 0.5478 | 0.728 | 0.3483 | 0.55 | 0.5777 | 0.7489 | 0.7293 | 0.835 |
| 0.7657 | 28.0 | 728 | 0.5865 | 0.5531 | 0.8225 | 0.6003 | -1.0 | 0.3494 | 0.6369 | 0.4022 | 0.6443 | 0.7122 | -1.0 | 0.5478 | 0.7289 | 0.3487 | 0.55 | 0.5784 | 0.7504 | 0.7321 | 0.8363 |
| 0.7657 | 29.0 | 754 | 0.5864 | 0.5532 | 0.8226 | 0.6009 | -1.0 | 0.3464 | 0.6379 | 0.4031 | 0.6431 | 0.7114 | -1.0 | 0.542 | 0.73 | 0.3462 | 0.5444 | 0.5779 | 0.7499 | 0.7356 | 0.84 |
| 0.7657 | 30.0 | 780 | 0.5863 | 0.5535 | 0.823 | 0.6013 | -1.0 | 0.3472 | 0.638 | 0.4031 | 0.6432 | 0.7115 | -1.0 | 0.542 | 0.73 | 0.3468 | 0.5444 | 0.5782 | 0.75 | 0.7356 | 0.84 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
phospho-app/ACT_BBOX-paper_pick-m66giqz04a
|
phospho-app
| 2025-09-17T07:26:11Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"safetensors",
"act",
"robotics",
"dataset:phospho-app/paper_pick_bboxes",
"region:us"
] |
robotics
| 2025-09-17T06:58:11Z |
---
datasets: phospho-app/paper_pick_bboxes
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act model - 🧪 phosphobot training pipeline
- **Dataset**: [phospho-app/paper_pick_bboxes](https://huggingface.co/datasets/phospho-app/paper_pick_bboxes)
- **Wandb run id**: None
## This model was trained using **[🧪phospho](https://phospho.ai)**
Training was successful, try it out on your robot!
## Training parameters
```text
{
"batch_size": 100,
"steps": 10000,
"save_freq": 5000,
"target_detection_instruction": "a piece of white square paper",
"image_key": "main",
"image_keys_to_keep": []
}
```
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758093865
|
devivodowdlel
| 2025-09-17T07:25:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged exotic iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T07:25:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Hyeji0101/Llama-3.18b-ORPO-model
|
Hyeji0101
| 2025-09-17T07:25:07Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T07:23:09Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Hyeji0101
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
johnruth/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-omnivorous_fast_ape
|
johnruth
| 2025-09-17T07:24:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am omnivorous_fast_ape",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-16T20:09:23Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am omnivorous_fast_ape
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
israel/llama3-8b-eng
|
israel
| 2025-09-17T07:23:05Z | 22 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-10T23:48:17Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winterfeb/gemma-270m-ko-en-Q8_0-GGUF
|
winterfeb
| 2025-09-17T07:23:02Z | 0 | 0 |
peft
|
[
"peft",
"gguf",
"base_model:adapter:unsloth/gemma-3-270m-it",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"base_model:winterfeb/gemma-270m-ko-en",
"base_model:adapter:winterfeb/gemma-270m-ko-en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-16T07:59:31Z |
---
base_model: winterfeb/gemma-270m-ko-en
library_name: peft
tags:
- base_model:adapter:unsloth/gemma-3-270m-it
- lora
- sft
- transformers
- trl
- unsloth
- llama-cpp
- gguf-my-repo
---
# winterfeb/gemma-270m-ko-en-Q8_0-GGUF
This model was converted to GGUF format from [`winterfeb/gemma-270m-ko-en`](https://huggingface.co/winterfeb/gemma-270m-ko-en) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/winterfeb/gemma-270m-ko-en) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo winterfeb/gemma-270m-ko-en-Q8_0-GGUF --hf-file gemma-270m-ko-en-q8_0.gguf -p "The meaning of life and the universe is"
```
### Server:
```bash
llama-server --hf-repo winterfeb/gemma-270m-ko-en-Q8_0-GGUF --hf-file gemma-270m-ko-en-q8_0.gguf -c 2048
```
Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo winterfeb/gemma-270m-ko-en-Q8_0-GGUF --hf-file gemma-270m-ko-en-q8_0.gguf -p "The meaning of life and the universe is"
```
or
```
./llama-server --hf-repo winterfeb/gemma-270m-ko-en-Q8_0-GGUF --hf-file gemma-270m-ko-en-q8_0.gguf -c 2048
```
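For Python workflows, the same GGUF file can also be driven through the `llama-cpp-python` bindings. This is a minimal sketch, not part of the original card, and assumes `llama-cpp-python` and `huggingface-hub` are installed (`pip install llama-cpp-python huggingface-hub`):
```python
# Minimal sketch using the llama-cpp-python bindings; the repo and file
# names match the llama.cpp commands above.
from llama_cpp import Llama

# Download the GGUF file from the Hub and load it.
llm = Llama.from_pretrained(
    repo_id="winterfeb/gemma-270m-ko-en-Q8_0-GGUF",
    filename="gemma-270m-ko-en-q8_0.gguf",
    n_ctx=2048,  # same context length as the llama-server example above
)

# Run a single completion, mirroring the CLI prompt.
output = llm("The meaning of life and the universe is", max_tokens=64)
print(output["choices"][0]["text"])
```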
|
israel/llama3-8b-loc
|
israel
| 2025-09-17T07:18:32Z | 28 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-10T23:43:44Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758093245
|
devivodowdlel
| 2025-09-17T07:15:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged exotic iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T07:15:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the method introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
wegohigh/Qwen3-1.7B-Base-MED-Instruct
|
wegohigh
| 2025-09-17T07:14:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T07:13:41Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
O2iginal/based-distill56l-dclm10b-s4096-step9-mamba_hy-2.9b-A0-hd64-ng6-msdim320-me1-bs1024-gpus8-sl32768
|
O2iginal
| 2025-09-17T07:14:34Z | 0 | 0 | null |
[
"yulanmini",
"hybrid",
"mamba",
"region:us"
] | null | 2025-09-17T04:52:50Z |
---
model_name: based-distill56l-dclm10b-s4096-step9-mamba_hy-2.9b-A0-hd64-ng6-msdim320-me1-bs1024-gpus8-sl32768
tags:
- yulanmini
- hybrid
- mamba
---
# based-distill56l-dclm10b-s4096-step9-mamba_hy-2.9b-A0-hd64-ng6-msdim320-me1-bs1024-gpus8-sl32768
This model was uploaded from the local checkpoint at `/mnt/nanjingcephfs/project_wx-rec-alg-bdc-exp/bwzheng/yulan/hyw/pretrain-linear-moe-dev/megatron_lm_workspace/checkpoint/based-distill56l-dclm10b-s512-step64-mamba_hybrid-2.9b-112layers-q30-kv6-hybrid0.0625-pattern_A0-mheaddim64-mnumgroups6-mstatedim320-mexpand1-freeze_false-ep1-mp2-pp1-cp1-lr2e-5-minlr7e-7-bs1024-gpus8-seqlen4096-loadyulan_attn_mamba`.
|
Karthikappi0011/llama-3-1b-finetuned-mr-convo-Pra
|
Karthikappi0011
| 2025-09-17T07:14:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T07:14:18Z |
---
base_model: unsloth/llama-3.2-1b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Karthikappi0011
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jo-mengr/mmcontext-pubmedbert-gs10k-cxg_geo_unfreeze_full
|
jo-mengr
| 2025-09-17T07:14:23Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:143054",
"loss:MultipleNegativesRankingLoss",
"code",
"dataset:jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation",
"dataset:jo-mengr/geo_70k_multiplets_natural_language_annotation",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:NeuML/pubmedbert-base-embeddings",
"base_model:finetune:NeuML/pubmedbert-base-embeddings",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-17T07:14:03Z |
---
language:
- code
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:143054
- loss:MultipleNegativesRankingLoss
base_model: NeuML/pubmedbert-base-embeddings
widget:
- source_sentence: sample_idx:SRX3675798
sentences:
- This measurement was conducted with Illumina HiSeq 2500. UM-UC18 bladder cancer
cell line, a type of urinary bladder cancer cell line, cultured for study of bladder
disease, cancer cell proliferation, and neoplasm.
- This measurement was conducted with Illumina HiSeq 2500. 15-year-old female patient
with osteosarcoma, a type of connective tissue disease affecting the bone. The
sample is from a U2OS cell line that has been transfected with siCTR, irradiated
with 4Gy (1Gy/min) and allowed to recover for 4 hours.
- sample_idx:SRX2405554
- source_sentence: sample_idx:SRX11176536
sentences:
- sample_idx:SRX2405554
- This measurement was conducted with Illumina HiSeq 2500. UM-UC18 bladder cancer
cell line, a type of urinary bladder cancer cell line, cultured for study of bladder
disease, cancer cell proliferation, and neoplasm.
- This measurement was conducted with NextSeq 500. A sample of cervical adenocarcinoma
cells (HeLa) that have been modified to be Tet-pLKO\_shGFP stable cells, with
no IP antibody or nucleotide alteration treatment. This cell line is often used
as a control for dox-inducible RNAi screens.
- source_sentence: sample_idx:census_218acb0f-9f2f-4f76-b90b-15a4b7c7f629_28820
sentences:
- This measurement was conducted with 10x 3' v2. This sample represents a CD8-positive,
alpha-beta T cell derived from a 29-year-old female of European descent with managed
systemic lupus erythematosus (SLE). The cell was obtained from peripheral blood
mononuclear cells (PBMCs) and exhibits transcriptional signatures associated with
SLE, including elevated expression of type 1 interferon-stimulated genes (ISGs)
in monocytes, reduced levels of naïve CD4+ T cells correlating with monocyte ISG
expression, and an expansion of repertoire-restricted cytotoxic GZMH+ CD8+ T cells.
- This measurement was conducted with 10x 3' v2. This sample is a CD4-positive,
alpha-beta T cell derived from a 20-year old Asian female with managed systemic
lupus erythematosus (SLE). It is a peripheral blood mononuclear cell.
- sample_idx:census_218acb0f-9f2f-4f76-b90b-15a4b7c7f629_21566
- source_sentence: sample_idx:census_74cff64f-9da9-4b2a-9b3b-8a04a1598040_8130
sentences:
- This measurement was conducted with 10x 3' v3. Macrophage cell type, derived from
decidua basalis tissue of an 8-9 post conception week (PCW) female fetus, analyzed
using 10X 3' single-nucleus RNA-seq technology.
- This measurement was conducted with 10x 3' v3. Endothelial cell sample taken from
the decidua basalis of a female fetus at 8 post conception weeks (PCW). The sample
was processed using 10X_3'_snRNA-seq technology on nucleus.
- sample_idx:census_74cff64f-9da9-4b2a-9b3b-8a04a1598040_5480
- source_sentence: sample_idx:census_1b9d8702-5af8-4142-85ed-020eb06ec4f6_20229
sentences:
- This measurement was conducted with 10x 5' v1. Alveolar macrophages derived from
the lung tissue of a male individual in his sixties.
- sample_idx:census_1b9d8702-5af8-4142-85ed-020eb06ec4f6_9460
- This measurement was conducted with 10x 3' v3. Terminally differentiated CD8+
T cells from the lung tissue of a male individual in his sixties.
datasets:
- jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation
- jo-mengr/geo_70k_multiplets_natural_language_annotation
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on NeuML/pubmedbert-base-embeddings
results:
- task:
type: triplet
name: Triplet
dataset:
name: cellxgene pseudo bulk 100k multiplets natural language annotation cell sentence 1 caption
type: cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1_caption
metrics:
- type: cosine_accuracy
value: 0.7946953773498535
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: geo 70k multiplets natural language annotation cell sentence 1 caption
type: geo_70k_multiplets_natural_language_annotation_cell_sentence_1_caption
metrics:
- type: cosine_accuracy
value: 0.8203158974647522
name: Cosine Accuracy
---
# SentenceTransformer based on NeuML/pubmedbert-base-embeddings
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [NeuML/pubmedbert-base-embeddings](https://huggingface.co/NeuML/pubmedbert-base-embeddings) on the [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1_caption](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) and [geo_70k_multiplets_natural_language_annotation_cell_sentence_1_caption](https://huggingface.co/datasets/jo-mengr/geo_70k_multiplets_natural_language_annotation) datasets. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [NeuML/pubmedbert-base-embeddings](https://huggingface.co/NeuML/pubmedbert-base-embeddings) <!-- at revision d6eaca8254bc229f3ca42749a5510ae287eb3486 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1_caption](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation)
- [geo_70k_multiplets_natural_language_annotation_cell_sentence_1_caption](https://huggingface.co/datasets/jo-mengr/geo_70k_multiplets_natural_language_annotation)
- **Language:** code
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): MMContextEncoder(
(text_encoder): BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(30522, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
(0-11): 12 x BertLayer(
(attention): BertAttention(
(self): BertSdpaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): BertPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
(pooling): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(omics_adapter): AdapterModule(
(net): Sequential(
(0): Linear(in_features=10000, out_features=768, bias=True)
(1): BatchNorm1d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(omics_encoder): MiniOmicsModel(
(embeddings): Embedding(158967, 10000, padding_idx=0)
)
)
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-gs10k-cxg_geo_unfreeze_full")
# Run inference
sentences = [
'sample_idx:census_1b9d8702-5af8-4142-85ed-020eb06ec4f6_20229',
"This measurement was conducted with 10x 3' v3. Terminally differentiated CD8+ T cells from the lung tissue of a male individual in his sixties.",
"This measurement was conducted with 10x 5' v1. Alveolar macrophages derived from the lung tissue of a male individual in his sixties.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, 0.2315, 0.0701],
# [ 0.2315, 1.0000, -0.0258],
# [ 0.0701, -0.0258, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Datasets: `cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1_caption` and `geo_70k_multiplets_natural_language_annotation_cell_sentence_1_caption`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1_caption | geo_70k_multiplets_natural_language_annotation_cell_sentence_1_caption |
|:--------------------|:------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------|
| **cosine_accuracy** | **0.7947** | **0.8203** |
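As a hedged sketch of how such a score is produced (not the original evaluation code), the linked `TripletEvaluator` can be run on (anchor, positive, negative) string triplets; the single triplet below reuses the strings from the usage example above and is illustrative only:
```python
# Hedged sketch: scoring one illustrative triplet with TripletEvaluator.
# The accuracies in the table come from the full evaluation splits, not this toy input.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-gs10k-cxg_geo_unfreeze_full")

evaluator = TripletEvaluator(
    anchors=["sample_idx:census_1b9d8702-5af8-4142-85ed-020eb06ec4f6_20229"],
    positives=["This measurement was conducted with 10x 3' v3. Terminally differentiated CD8+ T cells from the lung tissue of a male individual in his sixties."],
    negatives=["This measurement was conducted with 10x 5' v1. Alveolar macrophages derived from the lung tissue of a male individual in his sixties."],
    name="triplet_demo",
)
print(evaluator(model))  # returns a dict including the cosine accuracy for this set
```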
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Datasets
#### cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1_caption
* Dataset: [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1_caption](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) at [bca1860](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation/tree/bca18607118c15c49b72dbe736adc39b765b5b77)
* Size: 81,143 training samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_1 | negative_2 |
|:--------|:-----------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------|
| type | string | string | string | string |
| details | <ul><li>min: 56 characters</li><li>mean: 58.72 characters</li><li>max: 60 characters</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 48.4 tokens</li><li>max: 159 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 45.96 tokens</li><li>max: 155 tokens</li></ul> | <ul><li>min: 56 characters</li><li>mean: 58.74 characters</li><li>max: 60 characters</li></ul> |
* Samples:
| anchor | positive | negative_1 | negative_2 |
|:--------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------|
| <code>sample_idx:census_218acb0f-9f2f-4f76-b90b-15a4b7c7f629_26009</code> | <code>This measurement was conducted with 10x 3' v2. A proliferating lymphocyte cell sample, obtained from a 34-year-old female Asian individual, derived from peripheral blood mononuclear cells.</code> | <code>This measurement was conducted with 10x 3' v2. CD8-positive, alpha-beta T cell derived from a 51-year old European female with managed systemic lupus erythematosus (SLE), obtained from blood tissue and enriched as a peripheral blood mononuclear cell.</code> | <code>sample_idx:census_218acb0f-9f2f-4f76-b90b-15a4b7c7f629_38905</code> |
| <code>sample_idx:census_1b9d8702-5af8-4142-85ed-020eb06ec4f6_6333</code> | <code>This measurement was conducted with 10x 5' v1. Sample is a cell from the omentum tissue, specifically an effector memory CD4-positive, alpha-beta T cell, from a female in her sixth decade.</code> | <code>This measurement was conducted with 10x 3' v3. A cell sample from the spleen, belonging to the naive thymus-derived CD4-positive, alpha-beta T cell category, specifically Tnaive/CM_CD4, and identified as Tcm/Naive helper T cells within the T cells group.</code> | <code>sample_idx:census_1b9d8702-5af8-4142-85ed-020eb06ec4f6_4412</code> |
| <code>sample_idx:census_adda0684-f8ea-4403-b393-2a25607077c4_271</code> | <code>This measurement was conducted with 10x 3' v3. Neuron cell type from a 29-year-old male, specifically from the thalamic complex, specifically the thalamus (THM) - posterior nuclear complex of thalamus (PoN) - medial geniculate nuclei (MG).</code> | <code>This measurement was conducted with 10x 3' v3. Fibroblast cells from the thalamic complex, specifically from the thalamus (THM) - posterior nuclear complex of thalamus (PoN) - medial geniculate nuclei (MG) region, of a 42-year-old male.</code> | <code>sample_idx:census_adda0684-f8ea-4403-b393-2a25607077c4_585</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
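For reference, a minimal sketch of instantiating this loss with the exact parameters listed above; the base text encoder stands in for the full MMContext model, and the training loop is omitted:
```python
# Schematic sketch only: wiring up the loss with the parameters above.
from sentence_transformers import SentenceTransformer, losses
from sentence_transformers.util import cos_sim

model = SentenceTransformer("NeuML/pubmedbert-base-embeddings")  # stand-in for the full encoder

loss = losses.MultipleNegativesRankingLoss(
    model=model,
    scale=20.0,              # matches "scale": 20.0
    similarity_fct=cos_sim,  # matches "similarity_fct": "cos_sim"
)
```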
#### geo_70k_multiplets_natural_language_annotation_cell_sentence_1_caption
* Dataset: [geo_70k_multiplets_natural_language_annotation_cell_sentence_1_caption](https://huggingface.co/datasets/jo-mengr/geo_70k_multiplets_natural_language_annotation) at [a666078](https://huggingface.co/datasets/jo-mengr/geo_70k_multiplets_natural_language_annotation/tree/a666078793474b047c67be410bdfb21629cea9a4)
* Size: 61,911 training samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_1 | negative_2 |
|:--------|:-----------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------|
| type | string | string | string | string |
| details | <ul><li>min: 20 characters</li><li>mean: 20.29 characters</li><li>max: 21 characters</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 39.49 tokens</li><li>max: 188 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 37.59 tokens</li><li>max: 114 tokens</li></ul> | <ul><li>min: 20 characters</li><li>mean: 20.1 characters</li><li>max: 21 characters</li></ul> |
* Samples:
| anchor | positive | negative_1 | negative_2 |
|:----------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------|
| <code>sample_idx:SRX083304</code> | <code>This measurement was conducted with Illumina HiSeq 2000. 5-day HeLa cell line with ELAVL1/HuR siRNA1 knockdown, 120 hours post-transfection.</code> | <code>This measurement was conducted with Illumina HiSeq 2000. BJ fibroblast cells in a proliferative stage, with polyA RNA subtype.</code> | <code>sample_idx:SRX105303</code> |
| <code>sample_idx:SRX105302</code> | <code>This measurement was conducted with Illumina HiSeq 2000. BJ fibroblast cells in a proliferative stage, with polyA RNA subtype.</code> | <code>This measurement was conducted with Illumina HiSeq 2000. 5-day HeLa cell line with ELAVL1/HuR siRNA1 knockdown, 120 hours post-transfection.</code> | <code>sample_idx:SRX105303</code> |
| <code>sample_idx:SRX105303</code> | <code>This measurement was conducted with Illumina HiSeq 2000. BJ fibroblast cells at a confluent growth stage, with polyA RNA subtype.</code> | <code>This measurement was conducted with Illumina HiSeq 2000. 5-day HeLa cell line with ELAVL1/HuR siRNA1 knockdown, 120 hours post-transfection.</code> | <code>sample_idx:SRX105302</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Datasets
#### cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1_caption
* Dataset: [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1_caption](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) at [bca1860](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation/tree/bca18607118c15c49b72dbe736adc39b765b5b77)
* Size: 9,011 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_1 | negative_2 |
|:--------|:-----------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------|
| type | string | string | string | string |
| details | <ul><li>min: 56 characters</li><li>mean: 58.73 characters</li><li>max: 60 characters</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 47.49 tokens</li><li>max: 157 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 48.98 tokens</li><li>max: 157 tokens</li></ul> | <ul><li>min: 56 characters</li><li>mean: 58.74 characters</li><li>max: 60 characters</li></ul> |
* Samples:
| anchor | positive | negative_1 | negative_2 |
|:--------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------|
| <code>sample_idx:census_0b4a15a7-4e9e-4555-9733-2423e5c66469_490</code> | <code>This measurement was conducted with 10x 3' v3. Cell sample from the cortex of kidney, taken from a 43-year-old male of European ethnicity with a reported history of kidney cancer. The cell type is identified as a kidney collecting duct intercalated cell.</code> | <code>This measurement was conducted with 10x 3' v3. Epithelial cells derived from the cortex of a kidney of a 50-year old female European individual, preserved by cryopreservation.</code> | <code>sample_idx:census_0b4a15a7-4e9e-4555-9733-2423e5c66469_280</code> |
| <code>sample_idx:census_4976b234-9028-4b4b-8a2f-8ac59d636219_269</code> | <code>This measurement was conducted with 10x 3' v3. Neuron cell type from a 29-year-old male cerebellum, specifically from the Cerebellar Vermis - CBV region, with European self-reported ethnicity, analyzed at the nucleus level.</code> | <code>This measurement was conducted with 10x 3' v3. Fibroblast cells derived from the cerebellum tissue of a 50-year-old male, specifically from the Cerebellum (CB) - Cerebellar Vermis - CBV dissection.</code> | <code>sample_idx:census_4976b234-9028-4b4b-8a2f-8ac59d636219_826</code> |
| <code>sample_idx:census_44882825-0da1-4547-b721-2c6105d4a9d1_10258</code> | <code>This measurement was conducted with 10x 5' v1. Cell sample from the tonsil of a 9-year-old female with recurrent tonsillitis, characterized as a centroblast B cell with IGLC2, IGLV7-43, IGLJ3 immunoglobulin genes expressed.</code> | <code>This measurement was conducted with 10x 5' v1. This sample represents a tonsil germinal center B cell from a three-year-old human male with obstructive sleep apnea and recurrent tonsillitis. The study provides a comprehensive roadmap of human B cell maturation, including gene expression, antibody repertoires, and clonal sharing of B cell states at single-cell resolution, as well as memory B cell heterogeneity reflecting diverse functional and signaling states.</code> | <code>sample_idx:census_44882825-0da1-4547-b721-2c6105d4a9d1_243</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### geo_70k_multiplets_natural_language_annotation_cell_sentence_1_caption
* Dataset: [geo_70k_multiplets_natural_language_annotation_cell_sentence_1_caption](https://huggingface.co/datasets/jo-mengr/geo_70k_multiplets_natural_language_annotation) at [a666078](https://huggingface.co/datasets/jo-mengr/geo_70k_multiplets_natural_language_annotation/tree/a666078793474b047c67be410bdfb21629cea9a4)
* Size: 6,901 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_1 | negative_2 |
|:--------|:-----------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------|
| type | string | string | string | string |
| details | <ul><li>min: 20 characters</li><li>mean: 21.35 characters</li><li>max: 22 characters</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 41.2 tokens</li><li>max: 210 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 44.09 tokens</li><li>max: 178 tokens</li></ul> | <ul><li>min: 21 characters</li><li>mean: 21.04 characters</li><li>max: 22 characters</li></ul> |
* Samples:
| anchor | positive | negative_1 | negative_2 |
|:-----------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------|
| <code>sample_idx:SRX2244363</code> | <code>This measurement was conducted with Illumina HiSeq 2000. 15-year-old male HepG2 immortalized cell line with hepatocellular carcinoma, transiently expressing shRNA targeting PKM2 for RNA-seq study.</code> | <code>This measurement was conducted with Illumina HiSeq 2000. 15-year-old male patient with hepatocellular carcinoma; HNRNPC knocked down via shRNA in HepG2 (immortalized cell line) for RNA-seq analysis.</code> | <code>sample_idx:SRX5457055</code> |
| <code>sample_idx:SRX3136447</code> | <code>This measurement was conducted with Illumina HiSeq 2000. 16-year-old female's T cells from a control group, stimulated with ag85 at timepoint 0, and primary cells.</code> | <code>This measurement was conducted with Illumina HiSeq 2000. 17-year-old male's monocytes stimulated with mTb, taken at 180 days post-stimulation, as part of the control group in a study.</code> | <code>sample_idx:SRX3137689</code> |
| <code>sample_idx:SRX2734845</code> | <code>This measurement was conducted with Illumina HiSeq 2500. UM-UC18 bladder cancer cell line, a type of urinary bladder cancer cell line, cultured for study of bladder disease, cancer cell proliferation, and neoplasm.</code> | <code>This measurement was conducted with NextSeq 500. HeLa cells with PARP knockdown treatment.</code> | <code>sample_idx:SRX3130770</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `bf16`: True
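A minimal sketch of these non-default values expressed as Sentence Transformers training arguments (the `output_dir` is a hypothetical placeholder; everything else mirrors the list above):
```python
# Sketch only: the non-default hyperparameters above as training arguments.
from sentence_transformers.training_args import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="mmcontext-pubmedbert",  # hypothetical output path
    eval_strategy="steps",
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    learning_rate=2e-5,
    num_train_epochs=4,
    warmup_ratio=0.1,
    bf16=True,
)
```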
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | cellxgene pseudo bulk 100k multiplets natural language annotation cell sentence 1 caption loss | geo 70k multiplets natural language annotation cell sentence 1 caption loss | cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1_caption_cosine_accuracy | geo_70k_multiplets_natural_language_annotation_cell_sentence_1_caption_cosine_accuracy |
|:------:|:----:|:-------------:|:----------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|
| 0.0894 | 100 | 10.7639 | 12.1774 | 9.1972 | 0.6169 | 0.6547 |
| 0.1789 | 200 | 5.365 | 5.3143 | 5.8626 | 0.7589 | 0.7567 |
| 0.2683 | 300 | 3.4097 | 17.6062 | 5.2913 | 0.7596 | 0.7796 |
| 0.3578 | 400 | 2.6587 | 19.4612 | 5.1527 | 0.6728 | 0.7790 |
| 0.4472 | 500 | 2.1889 | 17.7623 | 4.3802 | 0.8087 | 0.7834 |
| 0.5367 | 600 | 1.9257 | 16.8122 | 4.5606 | 0.7972 | 0.7831 |
| 0.6261 | 700 | 1.7498 | 21.3413 | 6.7445 | 0.7161 | 0.7970 |
| 0.7156 | 800 | 1.6591 | 12.2050 | 4.4883 | 0.8626 | 0.7970 |
| 0.8050 | 900 | 1.5382 | 18.6180 | 4.6217 | 0.7839 | 0.8042 |
| 0.8945 | 1000 | 1.4497 | 17.2506 | 4.5358 | 0.8103 | 0.8035 |
| 0.9839 | 1100 | 1.327 | 16.2250 | 4.3189 | 0.8156 | 0.7935 |
| 1.0733 | 1200 | 1.224 | 13.6467 | 3.6715 | 0.8354 | 0.7960 |
| 1.1628 | 1300 | 1.2647 | 20.0122 | 5.5145 | 0.7455 | 0.8097 |
| 1.2522 | 1400 | 1.1107 | 17.3946 | 4.6305 | 0.7857 | 0.8041 |
| 1.3417 | 1500 | 1.0898 | 19.5191 | 4.9533 | 0.7675 | 0.8077 |
| 1.4311 | 1600 | 1.0824 | 15.2463 | 3.5608 | 0.8453 | 0.8123 |
| 1.5206 | 1700 | 1.066 | 21.6958 | 6.8176 | 0.6782 | 0.8092 |
| 1.6100 | 1800 | 1.0403 | 15.7400 | 3.2886 | 0.8278 | 0.8086 |
| 1.6995 | 1900 | 1.0234 | 20.4926 | 5.9406 | 0.7571 | 0.8171 |
| 1.7889 | 2000 | 0.9213 | 18.7522 | 4.0613 | 0.7744 | 0.8137 |
| 1.8784 | 2100 | 0.9838 | 16.5925 | 3.3635 | 0.8403 | 0.8180 |
| 1.9678 | 2200 | 0.9882 | 20.0623 | 4.8042 | 0.7775 | 0.8142 |
| 2.0572 | 2300 | 0.8484 | 18.5910 | 3.7370 | 0.8012 | 0.8183 |
| 2.1467 | 2400 | 0.8987 | 17.4152 | 3.6346 | 0.8396 | 0.8137 |
| 2.2361 | 2500 | 0.8398 | 10.1287 | 2.7103 | 0.8977 | 0.8176 |
| 2.3256 | 2600 | 0.8344 | 18.6915 | 3.6748 | 0.8233 | 0.8163 |
| 2.4150 | 2700 | 0.8271 | 14.0092 | 3.2875 | 0.8635 | 0.8074 |
| 2.5045 | 2800 | 0.7619 | 14.7635 | 2.6413 | 0.8684 | 0.8174 |
| 2.5939 | 2900 | 0.8194 | 21.7793 | 6.8491 | 0.7331 | 0.8173 |
| 2.6834 | 3000 | 0.8216 | 21.2248 | 5.7135 | 0.7579 | 0.8200 |
| 2.7728 | 3100 | 0.8013 | 21.3951 | 5.9865 | 0.7441 | 0.8181 |
| 2.8623 | 3200 | 0.772 | 16.1781 | 2.8270 | 0.8606 | 0.8206 |
| 2.9517 | 3300 | 0.7405 | 15.4191 | 2.7028 | 0.8836 | 0.8244 |
| 3.0411 | 3400 | 0.747 | 16.9891 | 2.9837 | 0.8516 | 0.8209 |
| 3.1306 | 3500 | 0.7091 | 19.0563 | 3.8736 | 0.8181 | 0.8205 |
| 3.2200 | 3600 | 0.7153 | 18.1161 | 3.4335 | 0.8360 | 0.8135 |
| 3.3095 | 3700 | 0.7257 | 17.4649 | 3.1265 | 0.8548 | 0.8192 |
| 3.3989 | 3800 | 0.6759 | 16.6262 | 2.8549 | 0.8734 | 0.8189 |
| 3.4884 | 3900 | 0.6998 | 19.3925 | 3.9615 | 0.8140 | 0.8196 |
| 3.5778 | 4000 | 0.687 | 19.2583 | 4.0010 | 0.8257 | 0.8194 |
| 3.6673 | 4100 | 0.7207 | 19.8988 | 4.2895 | 0.8069 | 0.8193 |
| 3.7567 | 4200 | 0.6832 | 12.5116 | 2.1091 | 0.9041 | 0.8228 |
| 3.8462 | 4300 | 0.6806 | 16.1651 | 2.6662 | 0.8735 | 0.8202 |
| 3.9356 | 4400 | 0.695 | 20.4435 | 4.7955 | 0.7947 | 0.8203 |
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 5.0.0
- Transformers: 4.55.0.dev0
- PyTorch: 2.5.1+cu121
- Accelerate: 1.9.0
- Datasets: 2.19.1
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| yalhessi/output_lemma_object_small_nodefs | yalhessi | 2025-09-17T07:14:01Z | 0 | 0 | peft | ["peft", "safetensors", "base_model:adapter:deepseek-ai/deepseek-coder-6.7b-base", "lora", "transformers", "text-generation", "arxiv:1910.09700", "base_model:deepseek-ai/deepseek-coder-6.7b-base", "region:us"] | text-generation | 2025-09-02T10:40:51Z |
---
base_model: deepseek-ai/deepseek-coder-6.7b-base
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:deepseek-ai/deepseek-coder-6.7b-base
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
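As the authors have not filled this in, the following is a minimal, hypothetical sketch inferred only from the repository metadata (a PEFT LoRA adapter for `deepseek-ai/deepseek-coder-6.7b-base`); standard PEFT loading is assumed and the prompt is illustrative:
```python
# Sketch only (assumption: standard PEFT adapter loading; not from the authors).
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("yalhessi/output_lemma_object_small_nodefs")
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-base")

inputs = tokenizer("-- example prompt", return_tensors="pt")  # illustrative input
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```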
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
| israel/llama3-8b-all | israel | 2025-09-17T07:13:59Z | 28 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-10T23:39:09Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| mradermacher/Nemonster_GrimoireDiabolic-12b-GGUF | mradermacher | 2025-09-17T07:13:09Z | 0 | 2 | transformers | ["transformers", "gguf", "en", "base_model:OmnicromsBrain/Nemonster_GrimoireDiabolic-12b", "base_model:quantized:OmnicromsBrain/Nemonster_GrimoireDiabolic-12b", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-09-17T05:17:16Z |
---
base_model: OmnicromsBrain/Nemonster_GrimoireDiabolic-12b
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/OmnicromsBrain/Nemonster_GrimoireDiabolic-12b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Nemonster_GrimoireDiabolic-12b-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Nemonster_GrimoireDiabolic-12b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
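As a concrete starting point, here is a minimal sketch (not an official recipe) that assumes `llama-cpp-python` and `huggingface_hub` are installed and pulls one of the single-file quants listed below:
```python
# Sketch: download and load one of the single-file quants from the table below.
# Assumes: pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Nemonster_GrimoireDiabolic-12b-GGUF",
    filename="Nemonster_GrimoireDiabolic-12b.Q4_K_S.gguf",  # "fast, recommended" in the table
    n_ctx=4096,  # context length; adjust to taste
)
print(llm("Once upon a time", max_tokens=32)["choices"][0]["text"])
```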
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Nemonster_GrimoireDiabolic-12b-GGUF/resolve/main/Nemonster_GrimoireDiabolic-12b.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Nemonster_GrimoireDiabolic-12b-GGUF/resolve/main/Nemonster_GrimoireDiabolic-12b.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Nemonster_GrimoireDiabolic-12b-GGUF/resolve/main/Nemonster_GrimoireDiabolic-12b.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nemonster_GrimoireDiabolic-12b-GGUF/resolve/main/Nemonster_GrimoireDiabolic-12b.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Nemonster_GrimoireDiabolic-12b-GGUF/resolve/main/Nemonster_GrimoireDiabolic-12b.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Nemonster_GrimoireDiabolic-12b-GGUF/resolve/main/Nemonster_GrimoireDiabolic-12b.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nemonster_GrimoireDiabolic-12b-GGUF/resolve/main/Nemonster_GrimoireDiabolic-12b.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nemonster_GrimoireDiabolic-12b-GGUF/resolve/main/Nemonster_GrimoireDiabolic-12b.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Nemonster_GrimoireDiabolic-12b-GGUF/resolve/main/Nemonster_GrimoireDiabolic-12b.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Nemonster_GrimoireDiabolic-12b-GGUF/resolve/main/Nemonster_GrimoireDiabolic-12b.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Nemonster_GrimoireDiabolic-12b-GGUF/resolve/main/Nemonster_GrimoireDiabolic-12b.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to
questions you might have, or to request quantization of another model.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| phospho-app/gr00t-paper_pick-rtg0oqc5i8 | phospho-app | 2025-09-17T07:12:44Z | 0 | 0 | phosphobot | ["phosphobot", "safetensors", "gr00t_n1_5", "gr00t", "robotics", "dataset:Hafnium49/paper_pick", "region:us"] | robotics | 2025-09-17T06:56:48Z |
---
datasets: Hafnium49/paper_pick
library_name: phosphobot
pipeline_tag: robotics
model_name: gr00t
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t model - 🧪 phosphobot training pipeline
- **Dataset**: [Hafnium49/paper_pick](https://huggingface.co/datasets/Hafnium49/paper_pick)
- **Wandb run id**: None
## This model was trained using **[🧪phospho](https://phospho.ai)**
Training was successful; try it out on your robot!
## Training parameters
```json
{
"validation_dataset_name": null,
"batch_size": 27,
"num_epochs": 10,
"save_steps": 1000,
"learning_rate": 0.0001,
"data_dir": "/tmp/outputs/data",
"validation_data_dir": "/tmp/outputs/validation_data",
"output_dir": "/tmp/outputs/train"
}
```
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
| tamewild/4b_v104_merged_e5 | tamewild | 2025-09-17T07:10:32Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-17T07:09:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| carolnc/gemma-3-finetune-270M-it-init-context | carolnc | 2025-09-17T07:09:39Z | 0 | 0 | transformers | ["transformers", "safetensors", "gemma3_text", "text-generation", "unsloth", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-17T07:09:28Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| tomal66/gemma3-1b-sarcasm-sft | tomal66 | 2025-09-17T07:08:48Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-09-14T20:13:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| luckeciano/Qwen-2.5-7B-GRPO-Base-KL-1.0-v2_9645 | luckeciano | 2025-09-17T07:08:32Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-17T01:25:12Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-Base-KL-1.0-v2_9645
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-Base-KL-1.0-v2_9645
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-Base-KL-1.0-v2_9645", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/5nk1z8bh)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
| NandhasuriyaS/qad_gpt_oss_fine_tuned | NandhasuriyaS | 2025-09-17T07:08:24Z | 0 | 0 | transformers | ["transformers", "safetensors", "gpt_oss", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-09-17T07:05:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| itslabib/Qwen3-0.6B-Gensyn-Swarm-agile_bold_chinchilla | itslabib | 2025-09-17T07:07:12Z | 30 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am agile_bold_chinchilla", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-15T09:29:26Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am agile_bold_chinchilla
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Paradoxis/LeL2.0_3B | Paradoxis | 2025-09-17T07:05:37Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "trl", "sft", "hf_jobs", "base_model:Qwen/Qwen2.5-VL-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct", "endpoints_compatible", "region:us"] | null | 2025-04-02T09:06:49Z |
---
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
model_name: LeL2.0_3B
tags:
- generated_from_trainer
- trl
- sft
- hf_jobs
licence: license
---
# Model Card for LeL2.0_3B
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Paradoxis/LeL2.0_3B", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/flofiz-universit-de-bourgogne/SFT/runs/7rqh1lfc)
This model was trained with SFT.
### Framework versions
- TRL: 0.24.0.dev0
- Transformers: 4.56.1
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
| israel/gemma-2-9b-it-gsm8k-sft-translated | israel | 2025-09-17T07:04:19Z | 65 | 0 | transformers | ["transformers", "safetensors", "gemma2", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-07-25T11:58:23Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| JobixAi/tts-pipeline-20250917_065706 | JobixAi | 2025-09-17T07:02:09Z | 0 | 0 | null | ["safetensors", "llama", "region:us"] | null | 2025-09-17T07:01:10Z |
This model was finetuned from the pretrained model `canopylabs/orpheus-3b-0.1-pretrained` using the finetuning pipeline. Full finetuning with Unsloth for 1 epoch.
**Finetune ID**: `68ea50b7-7686-4f62-896e-530dbd9d0145`
### Datasets
`JobixAi/mindy-higgs-metadata_1-20250917-061820`
`JobixAi/bob-higgs-metadata_2-20250917-065802`
### Inference
```python
temperature = 0.7
top_p = 0.9
repetition_penalty = 1.1
```
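These parameters slot into a standard `transformers` sampling call. The sketch below is an assumption-laden illustration: Orpheus-style models emit audio codes that still need decoding to a waveform (e.g. with SNAC), which is omitted here, and the plain-text prompt format is a guess.
```python
# Sketch only: text -> audio-token generation with the stated sampling parameters.
# Decoding the generated audio tokens to a waveform (e.g. via SNAC) is not shown.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "JobixAi/tts-pipeline-20250917_065706"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

inputs = tokenizer("Hello there!", return_tensors="pt")  # prompt format is an assumption
audio_tokens = model.generate(
    **inputs,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1,
)
```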
|