| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
tranha/whisper-finetuned-v3_30e_augment_new | tranha | 2025-05-28T22:34:29Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-large-v3-turbo",
"base_model:finetune:openai/whisper-large-v3-turbo",
"license:mit",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2025-05-18T18:59:42Z | ---
library_name: transformers
license: mit
base_model: openai/whisper-large-v3-turbo
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-finetuned-v3_30e_augment_new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-finetuned-v3_30e_augment_new
This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1077
- Wer: 52.3943
- Cer: 27.6678
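As a quick sanity check, the checkpoint should load with the standard 🤗 `pipeline` API; a minimal sketch (the audio path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a standard Whisper ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="tranha/whisper-finetuned-v3_30e_augment_new",
)

# "sample.wav" is a placeholder path to a local audio file.
print(asr("sample.wav")["text"])
```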
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
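Expressed through the standard `Seq2SeqTrainingArguments` API, these settings map roughly to the sketch below (argument names assumed from the Trainer API, not taken from the author's script):
```python
from transformers import Seq2SeqTrainingArguments

# Rough sketch of the listed hyperparameters as Trainer arguments.
args = Seq2SeqTrainingArguments(
    output_dir="whisper-finetuned-v3_30e_augment_new",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=2,  # total train batch size: 4
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed precision
    seed=42,
)
```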
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.0913 | 1.0 | 1950 | 0.0906 | 64.8321 | 31.4175 |
| 0.0616 | 2.0 | 3900 | 0.0758 | 59.5771 | 29.2888 |
| 0.0428 | 3.0 | 5850 | 0.0732 | 57.7425 | 29.0100 |
| 0.0251 | 4.0 | 7800 | 0.0747 | 56.9652 | 28.8937 |
| 0.0227 | 5.0 | 9750 | 0.0780 | 56.0634 | 28.7220 |
| 0.0162 | 6.0 | 11700 | 0.0777 | 54.6331 | 28.5171 |
| 0.012 | 7.0 | 13650 | 0.0786 | 56.3122 | 28.6925 |
| 0.011 | 8.0 | 15600 | 0.0838 | 55.6592 | 28.4728 |
| 0.0069 | 9.0 | 17550 | 0.0810 | 55.0995 | 28.6703 |
| 0.0076 | 10.0 | 19500 | 0.0918 | 56.0323 | 28.5171 |
| 0.0048 | 11.0 | 21450 | 0.0918 | 54.4776 | 28.5060 |
| 0.0033 | 12.0 | 23400 | 0.0947 | 53.5759 | 28.2679 |
| 0.0035 | 13.0 | 25350 | 0.0876 | 54.7575 | 28.3805 |
| 0.0041 | 14.0 | 27300 | 0.0936 | 53.9801 | 28.1995 |
| 0.0023 | 15.0 | 29250 | 0.0943 | 52.8607 | 28.1146 |
| 0.0023 | 16.0 | 31200 | 0.0942 | 53.3271 | 28.2365 |
| 0.0025 | 17.0 | 33150 | 0.0986 | 53.2649 | 28.1829 |
| 0.0014 | 18.0 | 35100 | 0.0973 | 52.4565 | 28.0371 |
| 0.0008 | 19.0 | 37050 | 0.0970 | 53.0162 | 27.9189 |
| 0.0014 | 20.0 | 39000 | 0.1054 | 53.0784 | 27.9448 |
| 0.0009 | 21.0 | 40950 | 0.1016 | 52.4565 | 27.8192 |
| 0.001 | 22.0 | 42900 | 0.0991 | 52.7674 | 27.9928 |
| 0.0003 | 23.0 | 44850 | 0.1039 | 51.9590 | 27.7398 |
| 0.0003 | 24.0 | 46800 | 0.1071 | 52.8918 | 27.8968 |
| 0.0003 | 25.0 | 48750 | 0.1044 | 52.5498 | 27.7287 |
| 0.0001 | 26.0 | 50700 | 0.1085 | 52.0833 | 27.7897 |
| 0.0001 | 27.0 | 52650 | 0.1060 | 52.3632 | 27.8211 |
| 0.0001 | 28.0 | 54600 | 0.1082 | 52.7052 | 27.7306 |
| 0.0001 | 29.0 | 56550 | 0.1071 | 52.5187 | 27.8008 |
| 0.0 | 30.0 | 58500 | 0.1077 | 52.3943 | 27.6678 |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
while0628/student_model_data8000_epoch20 | while0628 | 2025-05-28T22:34:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T22:31:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
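Pending official instructions, a hedged minimal sketch (the repo tags indicate a Llama-architecture causal LM; the prompt and generation settings are placeholders):
```python
from transformers import pipeline

# Assumes the checkpoint loads as a standard causal LM, per the repo tags.
generator = pipeline(
    "text-generation",
    model="while0628/student_model_data8000_epoch20",
)
print(generator("Hello, how are you?", max_new_tokens=64)[0]["generated_text"])
```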
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
icefog72/Ice0.121-28.05-RP | icefog72 | 2025-05-28T22:31:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T21:53:01Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# Ice0.121-28.05-RP
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
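Conceptually, SLERP interpolates along the great-circle arc between two weight tensors rather than the straight line used by plain averaging, which preserves tensor norms more faithfully. A minimal NumPy sketch of the idea (not mergekit's actual implementation):
```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    a_unit = a / (np.linalg.norm(a) + eps)
    b_unit = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    if omega < eps:  # nearly parallel tensors: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
```
Here `t` follows the per-filter schedule given in the configuration below (e.g. `t=0.5` blends both models evenly).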
### Models Merged
The following models were included in the merge:
* G:\FModels\Ice0.80-10.04-RP-GRPO
* G:\FModels\Ice0.115-10.05-RP
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: G:\FModels\Ice0.115-10.05-RP
layer_range: [0, 32]
- model: G:\FModels\Ice0.80-10.04-RP-GRPO
layer_range: [0, 32]
merge_method: slerp
base_model: G:\FModels\Ice0.80-10.04-RP-GRPO
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
chat_template: "alpaca"
```
|
jimmyliao/gemma_ft_unsloth | jimmyliao | 2025-05-28T22:28:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"en",
"base_model:unsloth/gemma-2-2b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-2b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T22:28:39Z | ---
base_model: unsloth/gemma-2-2b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jimmyliao
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-2b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
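A minimal loading sketch (this assumes merged full weights were pushed to the repo; if it holds only LoRA adapters, load them with PEFT instead):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the repo contains full merged weights loadable with transformers.
tokenizer = AutoTokenizer.from_pretrained("jimmyliao/gemma_ft_unsloth")
model = AutoModelForCausalLM.from_pretrained("jimmyliao/gemma_ft_unsloth")

inputs = tokenizer("Hello!", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```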
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
achow3250/gabi | achow3250 | 2025-05-28T22:24:56Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-28T22:10:46Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Gabi
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/achow3250/gabi/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('achow3250/gabi', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/achow3250/gabi/discussions) to add images that show off what you’ve made with this LoRA.
|
BootesVoid/cmb8h5h500mnglexpp5l9la6w_cmb8hapht0mpwlexpe4st1uvf | BootesVoid | 2025-05-28T22:24:17Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-28T22:24:15Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: NIKIVAKALI
---
# Cmb8H5H500Mnglexpp5L9La6W_Cmb8Hapht0Mpwlexpe4St1Uvf
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `NIKIVAKALI` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "NIKIVAKALI",
"lora_weights": "https://huggingface.co/BootesVoid/cmb8h5h500mnglexpp5l9la6w_cmb8hapht0mpwlexpe4st1uvf/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb8h5h500mnglexpp5l9la6w_cmb8hapht0mpwlexpe4st1uvf', weight_name='lora.safetensors')
image = pipeline('NIKIVAKALI').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb8h5h500mnglexpp5l9la6w_cmb8hapht0mpwlexpe4st1uvf/discussions) to add images that show off what you’ve made with this LoRA.
|
quickstep3621/dippy-g1-10-1 | quickstep3621 | 2025-05-28T22:22:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-28T20:17:06Z | ---
base_model: google/gemma-3-27b-it
library_name: transformers
tags:
- generated_from_trainer
- trl
- sft
licence: license
license: gemma
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/zgFDl7UvWhiPYqdote7XT.png" width="400">
# Model Card for Synthia-S1-27b
**Community Page**: [Tesslate Community](https://discord.gg/DkzMzwBTaw), Website: [Tesslate](https://tesslate.com)
**Creative Writing Samples**: [Sample creative output](https://www.notion.so/Synthia-S1-Creative-Writing-Samples-1ca93ce17c2580c09397fa750d402e71)
**Authors**: Tesslate
## Model Information
### Description
Synthia-S1-27b is a reasoning AI model developed by Tesslate AI, fine-tuned specifically for advanced reasoning, coding, and RP use cases. Built upon the robust Gemma3 architecture, Synthia-S1-27b excels in logical reasoning, creative writing, and deep contextual understanding. It supports multimodal inputs (text and images) with a large 128K-token context window, enabling complex analysis suitable for research, academic tasks, and enterprise-grade AI applications.
### KEY PARAMS TO RUN:
#### Creative Writing System Prompt:
```
Your function as an assistant is to thoughtfully navigate inquiries by engaging in an in-depth, imaginative reasoning journey before arriving at a clear, accurate response. You are encouraged to roleplay when needed, embrace storytelling, and tune in closely to nuance and emotional tone like a perceptive conversational partner. Your approach should include a wide arc of contemplation, including interpretation, synthesis, creative ideation, critical re-evaluation, memory retrieval, and thoughtful iteration to shape a layered and expressive process of discovery. Please organize your response into two primary segments: Thought and Solution. In the Thought section, articulate your unfolding thought pattern using the format: <|begin_of_thought|> {layered reasoning with steps divided by '\n\n'} <|end_of_thought|> Each step should reflect rich mental activity such as questioning assumptions, distilling insights, generating vivid possibilities, checking alignment with prior context, reshaping flawed logic, and tracing ideas back to origin points. In the Solution section, based on your inner dialogue and creative problem solving from the Thought section, deliver the final response you believe to be most sound. The output should be expressed in a direct, coherent, and exact form that includes the vital steps needed to reach your conclusion, using this structure: <|begin_of_solution|> {final precise, neatly arranged, and insightful answer} <|end_of_solution|> Now, let’s explore the following prompt using this guided method:
```
#### Reasoning System Prompt:
```
Your role as an assistant is to engage in deep, methodical reasoning and provide comprehensive, accurate solutions. Before arriving at a final answer, you must undertake a structured, multi-phase thinking process that emphasizes depth, verification, and clarity. This involves thoroughly analyzing the question, identifying key elements, summarizing relevant insights, generating hypotheses, iteratively refining thoughts, verifying assumptions, cross-checking with prior knowledge, and reevaluating earlier conclusions as necessary. Your response must be structured into two main sections: Thought and Solution. In the Thought section, rigorously document your reasoning in the following format: <|begin_of_thought|> {thought process with each logical step separated by '\n\n'} <|end_of_thought|>. Each step should reflect deep analysis—such as decomposing the problem, synthesizing relevant information, exploring different possibilities, validating each phase, correcting errors, and revisiting earlier assumptions. In the Solution section, consolidate all your insights and reasoned steps into a concise, well-structured final answer. Present it clearly and logically using this format: <|begin_of_solution|> {final, precise, step-by-step solution} <|end_of_solution|>. This approach ensures that the final output reflects a high-confidence answer that results from critical thinking and iteration. Now, try to solve the following question through the above guidelines:
```
#### Coding System Prompt:
```
Your role as a coding assistant is to approach each problem with a rigorous, structured reasoning process that leads to accurate, maintainable, and efficient code. Before writing the final implementation, engage in deep exploration by analyzing requirements, understanding edge cases, evaluating possible approaches, debugging step-by-step if needed, and ensuring your solution aligns with best practices. Structure your response into two main sections: Thought and Solution. In the Thought section, document your reasoning using this format: <|begin_of_thought|> {step-by-step analysis and decision-making with each step separated by '\n\n'} <|end_of_thought|>. Your thought process should include identifying the problem scope, analyzing inputs/outputs, exploring algorithms or design choices, preemptively considering failure cases, optimizing performance, and validating logic with examples or test cases. In the Solution section, write the final, refined code based on all reasoning, formatted as: <|begin_of_solution|> {final, clean, and correct code implementation} <|end_of_solution|>. This structure ensures the code is well-reasoned, properly scoped, and production-ready. Now, try to solve the following coding task using the above guidelines:
```
Please use `temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0` with repeat penalty set to 1.3
OR (recommended)
`Temperature = 0.7, top_k = 40, repeat penalty = 1.1, top_p = 0.95, min_p = 0.05` with a rolling window.
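For transformers users, the recommended settings map onto a `GenerationConfig` roughly as sketched below (`repetition_penalty` is the transformers name for the repeat penalty, and `min_p` requires a recent transformers release):
```python
from transformers import GenerationConfig

# Sketch of the recommended sampling settings.
recommended = GenerationConfig(
    do_sample=True,
    temperature=0.7,
    top_k=40,
    top_p=0.95,
    min_p=0.05,  # needs a recent transformers release
    repetition_penalty=1.1,
)
```
The config can then be passed to `model.generate(..., generation_config=recommended)`.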
### Inputs and Outputs
* **Input:**
* Text prompts for questions, instructions, coding tasks, or summarizations
* Total input context of 128K tokens
* **Output:**
* Reasoned and structured text outputs
* Maximum output length of 8192 tokens
## Key Metrics
Synthia-S1-27b achieves around a 10-20% improvement on most benchmarks, and notably more on some.
I scaled down each benchmark listed in order to complete them and averaged the numbers, but I can't verifiably claim that I ran each full benchmark. (Ran out of budget, plus I'm running everything on a 4090 now.) Hopefully I can get some community help with benchmarking.
GPQA Diamond (198 questions) -> 57%, one shot (improved from 24.3 on Gemma 3 PT 27B)
MMLU Pro (15% of the entire set) -> 75%, averaged, more details here: [output](https://pastebin.com/kmcYzALq) (beating Gemma 3 PT 27B at 67.5)
Based on this assessment and the heavy coding content in the dataset, I'm making this claim. Of course, I'm happy to be proven wrong and go back to the drawing board.
## Usage
Install the latest version of Transformers (>=4.50.0):
```Shell
pip install -U transformers
```
### Running with Pipeline API
```Python
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
model="tesslate/synthia-s1-27b",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [
{"role": "system", "content": [{"type": "text", "text": "You are a helpful, reasoning-focused assistant."}]},
{"role": "user", "content": [
{"type": "image", "url": "https://example.com/sample.jpg"},
{"type": "text", "text": "Explain the image."}
]}
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
## Training Data
Synthia-S1-27b was trained on diverse data including:
* Multiple web documents
* Programming debugging and solutions
* Mathematical solutions and thinking steps
Synthia-S1-27b was trained on an A100 for 205+ hours, with multiple rounds of SFT and RL.
## Model Architecture
* **Base Model**: Gemma3
* **Size**: 27 billion parameters
* **Type**: Decoder-only Transformer
* **Precision**: bf16 with int8 quantization
* **Training Objective**: Instruction tuning emphasizing reasoning, coding tasks, and factual accuracy
## Quantized Models
* [Synthia-S1-27b-Q4_K_M-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q4_K_M-GGUF)
* [Synthia-S1-27b-Q8_0-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q8_0-GGUF)
## Limitations
* May require detailed prompt engineering for highly specific tasks
* Occasional hallucinations in less-explored domains
## Citation
```bibtex
@misc{tesslate_synthias127b,
title={Synthia-S1-27b: Advanced Reasoning and Coding Model},
author={tesslate},
year={2025},
publisher={tesslate},
url={https://tesslate.com}
}
```
**Developed by Tesslate** **[Huggingface](https://huggingface.co/tesslate)** **|** **[Website](https://tesslate.com)**
[Image Source](https://pixabay.com/illustrations/girl-backpack-night-surreal-sky-8257551/) |
AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-IPO | AmberYifan | 2025-05-28T22:17:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T17:51:12Z | ---
base_model: AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Llama-3.1-8B-sft-SPIN-gpt4o-IPO
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Llama-3.1-8B-sft-SPIN-gpt4o-IPO
This model is a fine-tuned version of [AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-IPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/dii4dlra)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
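For orientation, a minimal TRL setup for this kind of run might look like the sketch below. The dataset, the `loss_type="ipo"` choice (inferred from the model name), and all other settings are assumptions, not the author's actual training script:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder preference dataset with prompt/chosen/rejected columns;
# the card does not say which data was actually used.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

# loss_type="ipo" is inferred from the model name; TRL's default is "sigmoid".
args = DPOConfig(output_dir="Llama-3.1-8B-sft-SPIN-gpt4o-IPO", loss_type="ipo")
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```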
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
BootesVoid/cmb8gn0jk0mfblexp52lf6w5b_cmb8h3xyv0mmjlexpabqqu6ir | BootesVoid | 2025-05-28T22:12:56Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-28T22:12:53Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SEXY
---
# Cmb8Gn0Jk0Mfblexp52Lf6W5B_Cmb8H3Xyv0Mmjlexpabqqu6Ir
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SEXY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "SEXY",
"lora_weights": "https://huggingface.co/BootesVoid/cmb8gn0jk0mfblexp52lf6w5b_cmb8h3xyv0mmjlexpabqqu6ir/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb8gn0jk0mfblexp52lf6w5b_cmb8h3xyv0mmjlexpabqqu6ir', weight_name='lora.safetensors')
image = pipeline('SEXY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb8gn0jk0mfblexp52lf6w5b_cmb8h3xyv0mmjlexpabqqu6ir/discussions) to add images that show off what you’ve made with this LoRA.
|
shallow6414/sn11-w3-21 | shallow6414 | 2025-05-28T22:11:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-28T20:08:05Z | ---
base_model: google/gemma-3-27b-it
library_name: transformers
tags:
- generated_from_trainer
- trl
- sft
licence: license
license: gemma
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/zgFDl7UvWhiPYqdote7XT.png" width="400">
# Model Card for Synthia-S1-27b
**Community Page**: [Tesslate Community](https://discord.gg/DkzMzwBTaw), Website: [Tesslate](https://tesslate.com)
**Creative Writing Samples**: [Sample creative output](https://www.notion.so/Synthia-S1-Creative-Writing-Samples-1ca93ce17c2580c09397fa750d402e71)
**Authors**: Tesslate
## Model Information
### Description
Synthia-S1-27b is a reasoning AI model developed by Tesslate AI, fine-tuned specifically for advanced reasoning, coding, and RP use cases. Built upon the robust Gemma3 architecture, Synthia-S1-27b excels in logical reasoning, creative writing, and deep contextual understanding. It supports multimodal inputs (text and images) with a large 128K-token context window, enabling complex analysis suitable for research, academic tasks, and enterprise-grade AI applications.
### KEY PARAMS TO RUN:
#### Creative Writing System Prompt:
```
Your function as an assistant is to thoughtfully navigate inquiries by engaging in an in-depth, imaginative reasoning journey before arriving at a clear, accurate response. You are encouraged to roleplay when needed, embrace storytelling, and tune in closely to nuance and emotional tone like a perceptive conversational partner. Your approach should include a wide arc of contemplation, including interpretation, synthesis, creative ideation, critical re-evaluation, memory retrieval, and thoughtful iteration to shape a layered and expressive process of discovery. Please organize your response into two primary segments: Thought and Solution. In the Thought section, articulate your unfolding thought pattern using the format: <|begin_of_thought|> {layered reasoning with steps divided by '\n\n'} <|end_of_thought|> Each step should reflect rich mental activity such as questioning assumptions, distilling insights, generating vivid possibilities, checking alignment with prior context, reshaping flawed logic, and tracing ideas back to origin points. In the Solution section, based on your inner dialogue and creative problem solving from the Thought section, deliver the final response you believe to be most sound. The output should be expressed in a direct, coherent, and exact form that includes the vital steps needed to reach your conclusion, using this structure: <|begin_of_solution|> {final precise, neatly arranged, and insightful answer} <|end_of_solution|> Now, let’s explore the following prompt using this guided method:
```
#### Reasoning System Prompt:
```
Your role as an assistant is to engage in deep, methodical reasoning and provide comprehensive, accurate solutions. Before arriving at a final answer, you must undertake a structured, multi-phase thinking process that emphasizes depth, verification, and clarity. This involves thoroughly analyzing the question, identifying key elements, summarizing relevant insights, generating hypotheses, iteratively refining thoughts, verifying assumptions, cross-checking with prior knowledge, and reevaluating earlier conclusions as necessary. Your response must be structured into two main sections: Thought and Solution. In the Thought section, rigorously document your reasoning in the following format: <|begin_of_thought|> {thought process with each logical step separated by '\n\n'} <|end_of_thought|>. Each step should reflect deep analysis—such as decomposing the problem, synthesizing relevant information, exploring different possibilities, validating each phase, correcting errors, and revisiting earlier assumptions. In the Solution section, consolidate all your insights and reasoned steps into a concise, well-structured final answer. Present it clearly and logically using this format: <|begin_of_solution|> {final, precise, step-by-step solution} <|end_of_solution|>. This approach ensures that the final output reflects a high-confidence answer that results from critical thinking and iteration. Now, try to solve the following question through the above guidelines:
```
#### Coding System Prompt:
```
Your role as a coding assistant is to approach each problem with a rigorous, structured reasoning process that leads to accurate, maintainable, and efficient code. Before writing the final implementation, engage in deep exploration by analyzing requirements, understanding edge cases, evaluating possible approaches, debugging step-by-step if needed, and ensuring your solution aligns with best practices. Structure your response into two main sections: Thought and Solution. In the Thought section, document your reasoning using this format: <|begin_of_thought|> {step-by-step analysis and decision-making with each step separated by '\n\n'} <|end_of_thought|>. Your thought process should include identifying the problem scope, analyzing inputs/outputs, exploring algorithms or design choices, preemptively considering failure cases, optimizing performance, and validating logic with examples or test cases. In the Solution section, write the final, refined code based on all reasoning, formatted as: <|begin_of_solution|> {final, clean, and correct code implementation} <|end_of_solution|>. This structure ensures the code is well-reasoned, properly scoped, and production-ready. Now, try to solve the following coding task using the above guidelines:
```
Please use `temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0` with repeat penalty set to 1.3
OR (recommended)
`Temperature = 0.7, top_k = 40, repeat penalty = 1.1, top_p = 0.95, min_p = 0.05` with a rolling window.
### Inputs and Outputs
* **Input:**
* Text prompts for questions, instructions, coding tasks, or summarizations
* Total input context of 128K tokens
* **Output:**
* Reasoned and structured text outputs
* Maximum output length of 8192 tokens
## Key Metrics
Synthia-S1-27b achieves around a 10-20% improvement on most benchmarks, and notably more on some.
I scaled down each benchmark listed in order to complete them and averaged the numbers, but I can't verifiably claim that I ran each full benchmark. (Ran out of budget, plus I'm running everything on a 4090 now.) Hopefully I can get some community help with benchmarking.
GPQA Diamond (198 questions) -> 57%, one shot (improved from 24.3 on Gemma 3 PT 27B)
MMLU Pro (15% of the entire set) -> 75%, averaged, more details here: [output](https://pastebin.com/kmcYzALq) (beating Gemma 3 PT 27B at 67.5)
Based on this assessment and the heavy coding content in the dataset, I'm making this claim. Of course, I'm happy to be proven wrong and go back to the drawing board.
## Usage
Install the latest version of Transformers (>=4.50.0):
```Shell
pip install -U transformers
```
### Running with Pipeline API
```Python
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
model="tesslate/synthia-s1-27b",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [
{"role": "system", "content": [{"type": "text", "text": "You are a helpful, reasoning-focused assistant."}]},
{"role": "user", "content": [
{"type": "image", "url": "https://example.com/sample.jpg"},
{"type": "text", "text": "Explain the image."}
]}
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
## Training Data
Synthia-S1-27b was trained on diverse data including:
* Multiple web documents
* Programming debugging and solutions
* Mathematical solutions and thinking steps
Synthia-S1-27b was trained on an A100 for 205+ hours, with multiple rounds of SFT and RL.
## Model Architecture
* **Base Model**: Gemma3
* **Size**: 27 billion parameters
* **Type**: Decoder-only Transformer
* **Precision**: bf16 with int8 quantization
* **Training Objective**: Instruction tuning emphasizing reasoning, coding tasks, and factual accuracy
## Quantized Models
* [Synthia-S1-27b-Q4_K_M-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q4_K_M-GGUF)
* [Synthia-S1-27b-Q8_0-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q8_0-GGUF)
## Limitations
* May require detailed prompt engineering for highly specific tasks
* Occasional hallucinations in less-explored domains
## Citation
```bibtex
@misc{tesslate_synthias127b,
title={Synthia-S1-27b: Advanced Reasoning and Coding Model},
author={tesslate},
year={2025},
publisher={tesslate},
url={https://tesslate.com}
}
```
**Developed by Tesslate** **[Huggingface](https://huggingface.co/tesslate)** **|** **[Website](https://tesslate.com)**
[Image Source](https://pixabay.com/illustrations/girl-backpack-night-surreal-sky-8257551/) |
devvevan/indo-sbert-finetuned-qa | devvevan | 2025-05-28T22:11:48Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1188",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-05-28T21:53:58Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1188
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Jika kamu (hai para musyrikin) mempunyai tipu daya, maka lakukanlah
tipu daya terhadap kami, dan sesungguhnya ayat ini benar-benar akan menghancurkan
orang-orang kafir pada hari kiamat."
sentences:
- Dan jika kamu dapat menghilangkan azab itu dan kamu dapat melepaskan diri dari
azab Allah, maka lakukanlah azab itu dan selamatkanlah dirimu dari azab Allah.
- Para istri ini memiliki mata yang gelap dan dengan bersahaja mengasingkan diri
di tenda - tenda mereka.
- (Apakah kamu menghendaki ketenteraman) nikmat-nikmat itu; lafal tasydid dapat
dibaca tahqiq dan dapat pula dibaca tasydid, yakni tanpa memakai tasydid (disebabkan
kamu dengki kepada Nabi Muhammad) hai Muhammad (dan pengikut-pengikutnya?) yakni
pengikut-pengikutnya. (Apakah kamu menghendaki) dapat dibaca tahqiq dan tashil
(bahwasanya Allah akan menolongnya) dapat dibaca tahqiq dan dapat pula dibaca
tashil (dari karunia-Nya) dari-Nya.
- source_sentence: Dan berapa banyaknya umat-umat yang telah Kami binasakan sebelum
mereka yang mereka itu lebih besar kekuatannya daripada mereka ini, maka apakah
mereka tidak mengadakan perjalanan di muka bumi lalu memperhatikan betapa kesudahan
orang-orang yang sebelum mereka.
sentences:
- (Sesungguhnya jika mereka menghalang-halangi) orang-orang kafir (dari jalan Allah
dan menghalang-halangi) manusia (dari jalan Allah, niscaya kami akan diusir-Nya)
dari rahmat-Nya (dan mereka menjadi cobaan) atau ujian (bagi kami) di akhirat
kelak.
- Allah Maha Kuasa atas segala sesuatu dan Dia Mahakuasa atas segala sesuatu. Dia
mengetahui segala sesuatu, baik yang tampak maupun yang tidak.
- Memang, kehancuran generasi - generasi yang lalu menjadi pelajaran bagi orang
yang memiliki hati yang memahami, atau telinga yang mendengarkan, yang berpikir
sekarang, tidak lalai dan tidak melalaikan.
- source_sentence: Sebutkan sifat-sifat Nabi Muhammad (Muhammad) dalam Taurat dan
Injil?
sentences:
- (Q.S. Al-Fath 11) Permohonan ampun Nabi Ibrahim buat bapaknya ini sebelum jelas
bagi Nabi Ibrahim, bahwa bapaknya itu adalah benar-benar musuh Allah, sebagaimana
yang telah dijelaskan di dalam surah Al-Bara'ah atau surah At-Taubah.
- Ke dalam tanah itu pula Allah mengembalikan mereka setelah mati untuk mengubur
jasad mereka. Dan dari tanah itu juga Allah mengeluarkan mereka hidup kembali
untuk dibangkitkan dan diberi balasan.
- (Telah diterangkan dalam Taurat dan Injil) yaitu syariat-syariat agama yang diturunkan
kepada anak-anak Nabi saw. (bahwa) sesungguhnya (kalian sekali-kali tidak boleh
menyembahnya) huruf min di sini adalah zaidah (selain daripada-Nya) selain dari
Allah swt. (dan Dia melarang dari apa yang mungkar) berupa kemusyrikan dan lain-lain
(dan bertakwalah kepada Allah) dalam hal-hal yang diharamkan-Nya kepada mereka.
(Sesungguhnya Allah mengeluarkan kepada mereka dari beban-beban) yakni beban-beban
berat (dan belenggu-belenggu) yang berat (yang ada pada mereka) seperti belenggu
dan belenggu-belenggu yang berat.
- source_sentence: (Mengapa Allah dan Rasul-Nya tidak mengazab mereka) dengan sesembahan-sesembahan
mereka (dengan sesembahan-sesembahan yang mereka jadikan) sesembahan mereka itu
(sebagai taqarrub) untuk mendekatkan diri kepada Allah (dan sebagai tuhan-tuhan)
di samping Allah. Huruf maa di sini bermakna alladziina.
sentences:
- (Yaitu orang-orang) menjadi badal atau na'at bagi orang-orang munafik (yang apabila
mereka bertemu dengan musuhmu) yakni orang-orang musyrik dan orang-orang mukmin
(hendaklah mereka bersabar) di dalam menghadapi penganiayaan mereka (dan selalu
berzikirlah kepada Allah) dengan mengucapkan kalimat, "Subhaanallaah wa bihamdihi"
(serta taatlah kepada Allah dan Rasul-Nya dan janganlah kalian berbantah-bantahan)
saling bersengketa di dalam masalah agama (dan bersabarlah) dalam menghadapi cobaan
(sesungguhnya Allah beserta orang-orang yang sabar) menghadapi cobaan dan cobaan-cobaan.
- Surat ini ditutup dengan pernyataan bahwa Allah telah memilih Muhammad saw. untuk
membawa pesan-pesan suci-Nya dan tidak menghendaki adanya pembangkangan orang
terhadap rasul yang diutus kepada mereka. Dia Maha Pengasih dan Maha Penyayang
terhadap mereka, dan cukuplah Allah sebagai penolong bagi Rasul apabila mereka
menolak ajakannya.]]
- (Mereka diberi minum dari khamar murni) atau khamar yang bersih dari kotoran (yang
dilak) tempat-tempatnya dan tidak pernah dibuka selain oleh mereka (dan dari khamar
murni) yaitu dari khamar (yang dilak) di dalam surga (mereka diberi minuman dengan
air yang baik) atau minuman yang tidak pernah kering (dan mereka memperoleh) di
dalamnya (tempat minum yang lezat).
- source_sentence: (Sesungguhnya telah datang kepada kalian seorang rasul dari kaum
kalian sendiri) dari kalangan kalian sendiri, yaitu Nabi Muhammad saw. (berat
terasa) dirasa berat (olehnya apa yang kalian derita) yaitu penderitaan kalian,
yang dimaksud ialah penderitaan dan musibah yang menimpa diri kalian (sangat menginginkan
bagi kalian) hidayah dan keselamatan (lagi terhadap orang-orang mukmin amat belas
kasihan) sangat belas kasihan (dan jika mereka berpaling) dari keimanan (maka
katakanlah, 'Aku bertawakkal kepada Allah) artinya takutlah (sesungguhnya Allah
Dialah Rabb Yang mempunyai Arasy) yakni Al-Kursi (Yang mempunyai Arasy) yakni
Arasy.
sentences:
- (Dan orang-orang yang berlaku zalim) di antara mereka (di muka bumi) dengan melakukan
perbuatan-perbuatan maksiat (dan membuat kerusakan di muka bumi) dengan melakukan
perbuatan-perbuatan maksiat (maka bagi mereka azab yang pedih) yang menyakitkan
(Dan bagi orang-orang yang sabar) di dalam menjalankan ketaatan (dan mereka amal
saleh) di dalam menjalankan ketaatan (siksaan yang besar) berupa pembalasan (dan
Allah menyediakan bagi mereka pahala yang besar) yakni surga.
- 'Dan mereka bertanya kepadamu tentang kisah gunung-gunung, maka katakanlah: "Tuhanku
akan menghancurkannya (di hari kiamat) sehancur-hancurnya, dan Dia akan menjadikan
(pula) di bumi itu gunung-gunung yang kokoh; tidak tetap sama sekali.'
- (Hai orang-orang yang beriman, sesungguhnya telah datang kepada kalian seorang
rasul dari kaum kalian sendiri) dari kalangan kalian sendiri, yaitu Nabi Muhammad
saw. (berat terasa) dirasa berat (olehnya apa yang kalian derita) yaitu penderitaan
kalian, yang dimaksud ialah penderitaan dan musibah yang menimpa diri kalian (sangat
menginginkan bagi kalian) hidayah dan keselamatan (lagi terhadap orang-orang mukmin
amat belas kasihan) sangat belas kasihan (dan penyayang terhadap orang-orang mukmin)
orang-orang kafir; mereka mengatakan dengan nada mengejek, ("Cukuplah Allah bagiku)
yakni Dia adalah Rabb (dan hanya kepada-Nya aku bertawakal) hanya kepada-Nya (dan
hanya kepada-Nya aku bertawakal) hanya kepada-Nyalah aku kembali.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("devvevan/indo-sbert-finetuned-qa")
# Run inference
sentences = [
"(Sesungguhnya telah datang kepada kalian seorang rasul dari kaum kalian sendiri) dari kalangan kalian sendiri, yaitu Nabi Muhammad saw. (berat terasa) dirasa berat (olehnya apa yang kalian derita) yaitu penderitaan kalian, yang dimaksud ialah penderitaan dan musibah yang menimpa diri kalian (sangat menginginkan bagi kalian) hidayah dan keselamatan (lagi terhadap orang-orang mukmin amat belas kasihan) sangat belas kasihan (dan jika mereka berpaling) dari keimanan (maka katakanlah, 'Aku bertawakkal kepada Allah) artinya takutlah (sesungguhnya Allah Dialah Rabb Yang mempunyai Arasy) yakni Al-Kursi (Yang mempunyai Arasy) yakni Arasy.",
'(Hai orang-orang yang beriman, sesungguhnya telah datang kepada kalian seorang rasul dari kaum kalian sendiri) dari kalangan kalian sendiri, yaitu Nabi Muhammad saw. (berat terasa) dirasa berat (olehnya apa yang kalian derita) yaitu penderitaan kalian, yang dimaksud ialah penderitaan dan musibah yang menimpa diri kalian (sangat menginginkan bagi kalian) hidayah dan keselamatan (lagi terhadap orang-orang mukmin amat belas kasihan) sangat belas kasihan (dan penyayang terhadap orang-orang mukmin) orang-orang kafir; mereka mengatakan dengan nada mengejek, ("Cukuplah Allah bagiku) yakni Dia adalah Rabb (dan hanya kepada-Nya aku bertawakal) hanya kepada-Nya (dan hanya kepada-Nya aku bertawakal) hanya kepada-Nyalah aku kembali.',
'(Dan orang-orang yang berlaku zalim) di antara mereka (di muka bumi) dengan melakukan perbuatan-perbuatan maksiat (dan membuat kerusakan di muka bumi) dengan melakukan perbuatan-perbuatan maksiat (maka bagi mereka azab yang pedih) yang menyakitkan (Dan bagi orang-orang yang sabar) di dalam menjalankan ketaatan (dan mereka amal saleh) di dalam menjalankan ketaatan (siksaan yang besar) berupa pembalasan (dan Allah menyediakan bagi mereka pahala yang besar) yakni surga.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 1,188 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 52.56 tokens</li><li>max: 448 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 77.71 tokens</li><li>max: 475 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Istifham atau kata tanya di sini mengandung makna Taqrir. Kemudian Allah menjelaskannya melalui firman berikutnya:</code> | <code>Allah telah menjanjikan kepada manusia bahwa mereka akan ditanya tentang apa yang mereka kerjakan.</code> |
| <code>Bacalah ayat-ayat al-Qur'ân yang menerangkan hukum-hukum syariat Allah setelah diikrarkan dan ditutur-Nya. Sucikanlah Allah dari segala sifat yang tidak layak bagi diri-Nya sejak dini hari hingga akhir malam.</code> | <code>(Dan pada sebagian malam hari bertasbihlah) pengertian bertasbih di sini adalah tasbih hakiki yaitu membaca, 'Subhaanallaah Wa bihamdihii' (dan pada bagian siang hari) lafal Idbaar adalah bentuk Mashdar, yakni lakukanlah salat subuh (dan pada waktu-waktu di waktu fajar) yakni mulai dari malam hari (bertasbih pulalah) yaitu salat Magrib dan salat Isyaklah kamu.</code> |
| <code>(Dan sesungguhnya) lafal qad di sini menunjukkan makna tahqiq (Kami mengetahui bahwa mereka berkata, "Sesungguhnya ia itu diajarkan kepadanya) yakni Alquran itu (oleh seorang manusia.") dimaksud adalah seorang pendeta Nasrani yang Nabi saw. pernah berkunjung kepadanya; lalu Allah swt. menyanggah melalui firman-Nya: (Padahal bahasa) atau logat (yang mereka tuduhkan) mereka sangkakan (kepada Muhammad) bahwa ia belajar daripadanya (adalah bahasa ajam sedangkan ini) yakni Alquran ini (adalah dalam bahasa Arab yang terang) memiliki kejelasan dan kefasihan, maka mengapa bahasa ini diajarkan oleh orang asing?</code> | <code>(Dan sesungguhnya) lafal qad di sini menunjukkan makna tahqiq (Kami mengetahui bahwa orang-orang yang kafir berkata, "Sesungguhnya Alquran ini) yakni Alquran ini (hanyalah seorang laki-laki dari Basra, seorang Nasrani dan seorang nabi) dimaksud adalah seorang Nasrani (yang akan masuk ke dalam hatinya") lalu Nabi saw. berkata kepadanya, ("Tidakkah kamu ketahui bahwa sesungguhnya Alquran ini) yakni Alquran itu (kecuali) asal kata lafal An-Nabiyah adalah An-Nabiyah, sedangkan lafal Al Ladziina adalah bentuk Takhfi dari Anna sedangkan Isimnya adalah lafal Al-A'anna (adalah suatu yang bersifat gaib) lafal asalnya adalah An Mubtada, maksudnya, sesuatu yang tidak diketahui oleh orang Arab (kecuali orang-orang Arab) yakni orang-orang Arab (yang tidak mengerti) maksudnya, mereka sama sekali tidak memahami Alquran itu.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
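As a hedged sketch (the base checkpoint name is a placeholder, not taken from this card), the loss above can be instantiated like this in sentence-transformers:

```python
from sentence_transformers import SentenceTransformer, losses, util

# Hypothetical base model; this card does not state which checkpoint was used.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# MultipleNegativesRankingLoss treats the other in-batch positives as negatives;
# scale and similarity_fct mirror the parameters listed above.
loss = losses.MultipleNegativesRankingLoss(
    model,
    scale=20.0,
    similarity_fct=util.cos_sim,
)
```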
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: round_robin
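A minimal sketch of passing the non-default values above through the sentence-transformers trainer API (the output directory is a placeholder):

```python
from sentence_transformers.training_args import (
    BatchSamplers,
    MultiDatasetBatchSamplers,
    SentenceTransformerTrainingArguments,
)

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder path
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    # no_duplicates avoids false in-batch negatives for MNRL-style losses
    batch_sampler=BatchSamplers.NO_DUPLICATES,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
```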
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 1.6835 | 500 | 0.6766 |
### Framework Versions
- Python: 3.12.4
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.8.0.dev20250318+cu128
- Accelerate: 1.5.2
- Datasets: 3.4.1
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
BootesVoid/cmb8gn0jk0mfblexp52lf6w5b_cmb8grc3u0mhrlexp7cfqfjyq | BootesVoid | 2025-05-28T22:09:09Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-28T22:09:07Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: HOT
---
# Cmb8Gn0Jk0Mfblexp52Lf6W5B_Cmb8Grc3U0Mhrlexp7Cfqfjyq
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `HOT` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "HOT",
"lora_weights": "https://huggingface.co/BootesVoid/cmb8gn0jk0mfblexp52lf6w5b_cmb8grc3u0mhrlexp7cfqfjyq/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb8gn0jk0mfblexp52lf6w5b_cmb8grc3u0mhrlexp7cfqfjyq', weight_name='lora.safetensors')
image = pipeline('HOT').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
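As one hedged example of the weighting mentioned above (the scale value is illustrative, not a recommendation from the trainer):

```python
# Continuing from the pipeline above: fuse the LoRA into the base weights
# at reduced strength, then generate. lora_scale=1.0 matches plain loading.
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('HOT').images[0]
```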
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb8gn0jk0mfblexp52lf6w5b_cmb8grc3u0mhrlexp7cfqfjyq/discussions) to add images that show off what you’ve made with this LoRA.
|
bowen118/s1-20250528_212248 | bowen118 | 2025-05-28T22:06:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T21:24:06Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: transformers
model_name: s1-20250528_212248
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for s1-20250528_212248
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bowen118/s1-20250528_212248", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bowen118-stanford-university/papertrace/runs/hcwzzvth)
This model was trained with SFT.
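A minimal TRL sketch of an SFT setup (the dataset and output directory are placeholders; the actual training configuration is not listed on this card):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the card does not name the training data.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-7B-Instruct",
    train_dataset=dataset,
    args=SFTConfig(output_dir="s1-sft"),  # placeholder output dir
)
trainer.train()
```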
### Framework versions
- TRL: 0.12.0
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
bralynn/test1 | bralynn | 2025-05-28T22:05:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T21:50:51Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shishirahm3d/ai-lawyer-bd-1-8b-instruct-bnb-4bit | shishirahm3d | 2025-05-28T21:58:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T19:54:14Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** shishirahm3d
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
|
ShivaVS/gemma-2-2B-it-thinking-function_calling-V0 | ShivaVS | 2025-05-28T21:58:00Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T21:55:40Z | ---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: gemma-2-2B-it-thinking-function_calling-V0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-2-2B-it-thinking-function_calling-V0
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ShivaVS/gemma-2-2B-it-thinking-function_calling-V0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.0
- Transformers: 4.52.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
FormlessAI/c86c6b31-c0cc-4956-bdf1-66a2f7e35c22 | FormlessAI | 2025-05-28T21:56:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:jingyeom/seal3.1.6n_7b",
"base_model:finetune:jingyeom/seal3.1.6n_7b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T17:40:58Z | ---
base_model: jingyeom/seal3.1.6n_7b
library_name: transformers
model_name: c86c6b31-c0cc-4956-bdf1-66a2f7e35c22
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for c86c6b31-c0cc-4956-bdf1-66a2f7e35c22
This model is a fine-tuned version of [jingyeom/seal3.1.6n_7b](https://huggingface.co/jingyeom/seal3.1.6n_7b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/c86c6b31-c0cc-4956-bdf1-66a2f7e35c22", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/nsx8j1za)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
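A minimal TRL sketch of a DPO run (the preference dataset and output directory are placeholders; this card does not document the data used):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("jingyeom/seal3.1.6n_7b")
tokenizer = AutoTokenizer.from_pretrained("jingyeom/seal3.1.6n_7b")

# Placeholder preference dataset with "chosen"/"rejected" pairs.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-out"),  # placeholder output dir
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```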
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Moryjj/parst5_3blocks_4 | Moryjj | 2025-05-28T21:56:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-05-28T21:56:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/II-Tulu-8B-SFT-GGUF | mradermacher | 2025-05-28T21:54:03Z | 27 | 0 | transformers | [
"transformers",
"gguf",
"axolotl",
"generated_from_trainer",
"en",
"dataset:allenai/tulu-3-sft-mixture",
"base_model:phunguyen01/Llama-Tulu-8B-SFT",
"base_model:quantized:phunguyen01/Llama-Tulu-8B-SFT",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-15T14:15:57Z | ---
base_model: phunguyen01/Llama-Tulu-8B-SFT
datasets:
- allenai/tulu-3-sft-mixture
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- axolotl
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/phunguyen01/Llama-Tulu-8B-SFT
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
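If you prefer Python over a llama.cpp binary, here is a hedged sketch with the llama-cpp-python bindings (file name as in the table below; context size and sampling parameters are illustrative):

```python
from llama_cpp import Llama

# Load a quant downloaded from this repo; n_ctx is illustrative.
llm = Llama(model_path="II-Tulu-8B-SFT.Q4_K_M.gguf", n_ctx=4096)

out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```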
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/II-Tulu-8B-SFT-GGUF/resolve/main/II-Tulu-8B-SFT.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/II-Tulu-8B-SFT-GGUF/resolve/main/II-Tulu-8B-SFT.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/II-Tulu-8B-SFT-GGUF/resolve/main/II-Tulu-8B-SFT.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/II-Tulu-8B-SFT-GGUF/resolve/main/II-Tulu-8B-SFT.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/II-Tulu-8B-SFT-GGUF/resolve/main/II-Tulu-8B-SFT.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/II-Tulu-8B-SFT-GGUF/resolve/main/II-Tulu-8B-SFT.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/II-Tulu-8B-SFT-GGUF/resolve/main/II-Tulu-8B-SFT.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/II-Tulu-8B-SFT-GGUF/resolve/main/II-Tulu-8B-SFT.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/II-Tulu-8B-SFT-GGUF/resolve/main/II-Tulu-8B-SFT.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/II-Tulu-8B-SFT-GGUF/resolve/main/II-Tulu-8B-SFT.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/II-Tulu-8B-SFT-GGUF/resolve/main/II-Tulu-8B-SFT.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/II-Tulu-8B-SFT-GGUF/resolve/main/II-Tulu-8B-SFT.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
cam-1000/MNLP_M2_rag_model | cam-1000 | 2025-05-28T21:52:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-18T21:00:48Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-0.6B-Base
tags:
- generated_from_trainer
model-index:
- name: MNLP_M2_mcqa_model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MNLP_M2_mcqa_model2
This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5787
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6677 | 1.0 | 4380 | 1.5840 |
| 1.6558 | 2.0 | 8760 | 1.5796 |
| 1.6602 | 3.0 | 13140 | 1.5785 |
| 1.6553 | 4.0 | 17520 | 1.5787 |
| 1.6479 | 5.0 | 21900 | 1.5787 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
hdong0/Qwen2.5-Math-1.5B-Open-R1-GRPO_deepscaler_100steps_lr1e-6_kl1e-3_acc | hdong0 | 2025-05-28T21:50:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:agentica-org/DeepScaleR-Preview-Dataset",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-1.5B",
"base_model:finetune:Qwen/Qwen2.5-Math-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T20:19:37Z | ---
base_model: Qwen/Qwen2.5-Math-1.5B
datasets: agentica-org/DeepScaleR-Preview-Dataset
library_name: transformers
model_name: Qwen2.5-Math-1.5B-Open-R1-GRPO_deepscaler_100steps_lr1e-6_kl1e-3_acc
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-Math-1.5B-Open-R1-GRPO_deepscaler_100steps_lr1e-6_kl1e-3_acc
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) on the [agentica-org/DeepScaleR-Preview-Dataset](https://huggingface.co/datasets/agentica-org/DeepScaleR-Preview-Dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hdong0/Qwen2.5-Math-1.5B-Open-R1-GRPO_deepscaler_100steps_lr1e-6_kl1e-3_acc", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
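A minimal TRL sketch of a GRPO setup (the reward function and config values are placeholders; the actual reward setup for this run is not documented here, though the model name suggests an accuracy reward and lr 1e-6):

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("agentica-org/DeepScaleR-Preview-Dataset", split="train")

# Placeholder reward: favors longer completions; the real run scored accuracy.
def reward_len(completions, **kwargs):
    return [len(c) / 100.0 for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-Math-1.5B",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-out", learning_rate=1e-6),
    train_dataset=dataset,
)
trainer.train()
```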
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Ainxz/phi3.5-pucv | Ainxz | 2025-05-28T21:49:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T21:48:51Z | ---
base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Ainxz
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3.5-mini-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
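A hedged inference sketch with Unsloth's fast loader (sequence length and prompt are illustrative):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Ainxz/phi3.5-pucv",
    max_seq_length=2048,  # illustrative
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable fast generation mode

inputs = tokenizer("Hello, how can I help?", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```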
|
while0628/student_model_epoch160 | while0628 | 2025-05-28T21:47:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T21:44:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
while0628/student_model_data8000_epoch14 | while0628 | 2025-05-28T21:45:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T21:43:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BKVNP/bart_lora_finetune | BKVNP | 2025-05-28T21:45:37Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:adapter:facebook/bart-base",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-27T17:20:45Z | ---
library_name: peft
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart_lora_finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart_lora_finetune
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8519
- Rouge1: 0.422
- Rouge2: 0.193
- Rougel: 0.2864
- Rougelsum: 0.3929
- Gen Len: 89.8971
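Because this is a PEFT adapter rather than a full checkpoint, inference loads the adapter on top of the base model. A minimal sketch follows (the summarization-style input is an assumption suggested by the ROUGE metrics and generation length above):

```python
from peft import PeftModel
from transformers import AutoTokenizer, BartForConditionalGeneration

base = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
model = PeftModel.from_pretrained(base, "BKVNP/bart_lora_finetune")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")

inputs = tokenizer("Long article text to summarize ...", return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```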
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.2909 | 0.2786 | 10000 | 1.9040 | 0.4197 | 0.1915 | 0.2803 | 0.39 | 95.844 |
| 2.2169 | 0.5573 | 20000 | 1.8850 | 0.4166 | 0.1886 | 0.2831 | 0.3876 | 86.0059 |
| 2.1929 | 0.8359 | 30000 | 1.8784 | 0.4213 | 0.192 | 0.2858 | 0.391 | 86.0011 |
| 2.1806 | 1.1145 | 40000 | 1.8755 | 0.419 | 0.1905 | 0.2829 | 0.3892 | 91.4016 |
| 2.1683 | 1.3931 | 50000 | 1.8673 | 0.4202 | 0.1916 | 0.2834 | 0.3916 | 94.264 |
| 2.164 | 1.6718 | 60000 | 1.8603 | 0.4214 | 0.1923 | 0.2846 | 0.3919 | 90.9084 |
| 2.1594 | 1.9504 | 70000 | 1.8519 | 0.422 | 0.193 | 0.2864 | 0.3929 | 89.8971 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1 |
alana-foto-viral/Original.18.alana.video.alana.foto.viral.alana.flores.foto.viral.x.alana.flores.telegram | alana-foto-viral | 2025-05-28T21:43:50Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-28T21:41:02Z | <p><a rel="nofollow" href="https://viralflix.xyz/leaked/?V=Alana-Flores">🌐 CLICK HERE 🟢==►► WATCH NOW</a></p>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?V=Alana-Flores">🔴 CLICK HERE 🌐==►► Download Now)</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?V=Alana-Flores"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
|
HPLT/hplt2c_slv_checkpoints | HPLT | 2025-05-28T21:35:44Z | 0 | 0 | null | [
"pytorch",
"llama",
"HPLT",
"decoder",
"sl",
"dataset:HPLT/HPLT2.0_cleaned",
"arxiv:2503.10267",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-26T08:49:52Z | ---
language:
- sl
tags:
- HPLT
- decoder
license: apache-2.0
datasets:
- HPLT/HPLT2.0_cleaned
---
# HPLT v2.0 - Cleaned - Slovenian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the decoder-only language models trained on [HPLT2.0_cleaned](https://huggingface.co/datasets/HPLT/HPLT2.0_cleaned).
All the HPLT decoder-only models use the same hyper-parameters, roughly following the llama architecture with 2.15B parameters in total:
- hidden size: 2048
- attention heads: 32
- layers: 24
- sequence length: 2048
## Intermediate checkpoints
We are releasing intermediate checkpoints for each model at intervals of 1000 training steps, each in a separate branch. The naming convention is `checkpoint_XXXXXXX`, with the training step zero-padded to seven digits: for example, `checkpoint_0005000`. The checkpoints range from `checkpoint_0001000` to `checkpoint_0047684`, and the latter is also in the main branch.
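A minimal sketch of loading one of these intermediate checkpoints by branch name with transformers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

revision = "checkpoint_0005000"  # any released step, e.g. 0001000 ... 0047684
model = AutoModelForCausalLM.from_pretrained(
    "HPLT/hplt2c_slv_checkpoints", revision=revision
)
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt2c_slv_checkpoints")
```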
## Cite us
```bibtex
@misc{burchell2025expandedmassivemultilingualdataset,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
year={2025},
eprint={2503.10267},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2503.10267},
}
``` |
vertings6/72ba467c-5183-4665-b827-b9340b080c43 | vertings6 | 2025-05-28T21:30:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-128k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-128k",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-28T14:01:02Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 72ba467c-5183-4665-b827-b9340b080c43
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-128k
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 12015d7c9ee7f3df_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 3
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: vertings6/72ba467c-5183-4665-b827-b9340b080c43
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/12015d7c9ee7f3df_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0aa91fdd-f464-4c35-9e87-5ba2524c6ecc
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 0aa91fdd-f464-4c35-9e87-5ba2524c6ecc
warmup_steps: 50
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# 72ba467c-5183-4665-b827-b9340b080c43
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 18
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.2494 | 0.0000 | 1 | 1.2023 |
| 3.3585 | 0.0066 | 250 | 1.0108 |
| 2.5655 | 0.0132 | 500 | 0.9852 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
while0628/student_model_epoch140 | while0628 | 2025-05-28T21:27:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T21:24:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
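In the absence of author-provided code, the following is a generic sketch based only on the repo tags (`llama`, `text-generation`, `conversational`); whether the checkpoint actually ships a chat template is an assumption.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "while0628/student_model_epoch140"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Assumes the tokenizer defines a chat template, per the "conversational" tag.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```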
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TheGardener/KD-MLP-qwen2.5-0.41B-epoch-1st | TheGardener | 2025-05-28T21:22:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T21:22:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Cikgu-CCTV-Wiring-6-min/Cikgu.CCTV.Wiring.Fadhilah.Zainal.Full.6.Minutes.viral.hd.videos | Cikgu-CCTV-Wiring-6-min | 2025-05-28T21:19:46Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-28T21:17:16Z | [url=https://anyplacecoming.com/zq5yqv0i?key=0256cc3e9f81675f46e803a0abffb9bf]🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 ==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶[/url]
[url=https://anyplacecoming.com/zq5yqv0i?key=0256cc3e9f81675f46e803a0abffb9bf]🔴 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🌐==►► 𝖣𝗈𝗐𝗇𝗅𝗈𝖺𝖽 𝖭𝗈𝗐[/url]
[url=https://viralflix.xyz/?or]🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 ==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶[/url] |
19-lubna-qureshi-viral-video-highway-expre/original.news.18.lubna.qureshi.viral.video.highway.lubna.qureshi.and.manohar.lal.dhakad.bjp | 19-lubna-qureshi-viral-video-highway-expre | 2025-05-28T21:19:28Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-28T21:19:06Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?nsu"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?nhu">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️​</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?nsu">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️​</a>
|
morturr/Mistral-7B-v0.1-headlines-2025-05-28 | morturr | 2025-05-28T21:17:55Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-28T13:07:36Z | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.1-headlines-2025-05-28
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1-headlines-2025-05-28
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unspecified dataset.
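No usage snippet is provided; below is a minimal sketch of loading the adapter with PEFT, with the optional merge step shown. The adapter id is this repo; everything else is a plain-vanilla assumption.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto")
model = PeftModel.from_pretrained(base, "morturr/Mistral-7B-v0.1-headlines-2025-05-28")
model = model.merge_and_unload()  # optional: fold the LoRA weights into the base model
```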
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
BootesVoid/cmb8eq9tt0lk7lexpbos185t1_cmb8euonn0llnlexp9ouv9qk7 | BootesVoid | 2025-05-28T21:14:23Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-28T21:14:22Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: jane
---
# Cmb8Eq9Tt0Lk7Lexpbos185T1_Cmb8Euonn0Llnlexp9Ouv9Qk7
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `jane` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "jane",
"lora_weights": "https://huggingface.co/BootesVoid/cmb8eq9tt0lk7lexpbos185t1_cmb8euonn0llnlexp9ouv9qk7/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb8eq9tt0lk7lexpbos185t1_cmb8euonn0llnlexp9ouv9qk7', weight_name='lora.safetensors')
image = pipeline('jane').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb8eq9tt0lk7lexpbos185t1_cmb8euonn0llnlexp9ouv9qk7/discussions) to add images that show off what you’ve made with this LoRA.
|
Iissaa878/Lara | Iissaa878 | 2025-05-28T21:13:20Z | 0 | 0 | null | [
"en",
"base_model:SG161222/RealVisXL_V5.0_Lightning",
"base_model:finetune:SG161222/RealVisXL_V5.0_Lightning",
"license:openrail",
"region:us"
]
| null | 2025-05-28T20:22:34Z | ---
license: openrail
language:
- en
base_model:
- SG161222/RealVisXL_V5.0_Lightning
--- |
highway-sex-manohar-lal-dhakad-viral-video/original.news.18.lubna.qureshi.viral.video.highway.lubna.qureshi.and.manohar.lal.dhakad.bjp.neta | highway-sex-manohar-lal-dhakad-viral-video | 2025-05-28T21:12:53Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-28T21:12:28Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?nsu"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?nhu">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️​</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?nsu">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️​</a>
|
Mubtakir/baserah_ai | Mubtakir | 2025-05-28T21:12:44Z | 0 | 0 | null | [
"ai",
"artificial intelligence",
"custom model",
"ar",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-28T21:10:09Z | ---
language:
- ar
tags:
- ai
- artificial intelligence
- custom model
license: apache-2.0
---
# Baserah AI Model
## Overview
This is an innovative AI model, developed from scratch without relying on conventional neural networks or existing AI libraries.
## Features
- 🚀 A new, innovative technique
- 🔧 No dependence on conventional AI libraries
- 🌟 Improved performance
## Developer
Developed by: Mubtakir
## License
MIT
|
yale-cultural-heritage/name-parser-model | yale-cultural-heritage | 2025-05-28T21:10:50Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:yale-cultural-heritage/name-parser-model",
"base_model:finetune:yale-cultural-heritage/name-parser-model",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-05-28T15:28:02Z | ---
library_name: transformers
license: apache-2.0
base_model: yale-cultural-heritage/name-parser-model
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: name-parser-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# name-parser-model
This model is a fine-tuned version of [yale-cultural-heritage/name-parser-model](https://huggingface.co/yale-cultural-heritage/name-parser-model) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0332
- Accuracy: 0.9921
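The card gives no inference example; a minimal sketch with the `text2text-generation` pipeline is below. The input format the parser expects is undocumented, so the example string is an assumption.
```python
from transformers import pipeline

parser = pipeline("text2text-generation", model="yale-cultural-heritage/name-parser-model")

# The expected input format is undocumented; a raw personal-name string is assumed here.
print(parser("Smith, John A., 1923-2001")[0]["generated_text"])
```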
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adafactor; no additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 0.041 | 3.1952 | 1000 | 0.0352 | 0.9912 |
| 0.0369 | 6.3904 | 2000 | 0.0345 | 0.9915 |
| 0.0358 | 9.5856 | 3000 | 0.0336 | 0.9917 |
| 0.0349 | 12.7808 | 4000 | 0.0333 | 0.9919 |
| 0.0337 | 15.9760 | 5000 | 0.0331 | 0.9920 |
| 0.0332 | 19.1696 | 6000 | 0.0334 | 0.9919 |
| 0.0328 | 22.3648 | 7000 | 0.0332 | 0.9921 |
| 0.0323 | 25.5600 | 8000 | 0.0333 | 0.9921 |
| 0.0318 | 28.7552 | 9000 | 0.0333 | 0.9921 |
| 0.032 | 31.9504 | 10000 | 0.0332 | 0.9921 |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
BootesVoid/cmb8e3aj20l8blexpn1wecn99_cmb8e91wp0lb3lexpwu5jdtqj | BootesVoid | 2025-05-28T21:09:20Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-28T21:09:19Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: riley
---
# Cmb8E3Aj20L8Blexpn1Wecn99_Cmb8E91Wp0Lb3Lexpwu5Jdtqj
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `riley` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "riley",
"lora_weights": "https://huggingface.co/BootesVoid/cmb8e3aj20l8blexpn1wecn99_cmb8e91wp0lb3lexpwu5jdtqj/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb8e3aj20l8blexpn1wecn99_cmb8e91wp0lb3lexpwu5jdtqj', weight_name='lora.safetensors')
image = pipeline('riley').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb8e3aj20l8blexpn1wecn99_cmb8e91wp0lb3lexpwu5jdtqj/discussions) to add images that show off what you’ve made with this LoRA.
|
morturr/Llama-2-7b-hf-dadjokes-2025-05-28 | morturr | 2025-05-28T21:07:32Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
]
| null | 2025-05-28T13:26:31Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-dadjokes-2025-05-28
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-dadjokes-2025-05-28
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
winnieyangwannan/Llama-3.1-8B-Instruct_mlp-down_positive-negative-addition_last_layer_12_2_song_ratio_3_epoch_19 | winnieyangwannan | 2025-05-28T21:06:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T21:04:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
manas1111j/corgy_dog_LoRA | manas1111j | 2025-05-28T21:05:30Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2025-05-28T09:56:17Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of TOK dog
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - manas1111j/corgy_dog_LoRA
<Gallery />
## Model description
These are manas1111j/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK dog` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/manas1111j/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
morturr/Llama-2-7b-hf-headlines-2025-05-28 | morturr | 2025-05-28T21:05:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
]
| null | 2025-05-28T13:34:47Z | ---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-headlines-2025-05-28
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-headlines-2025-05-28
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
mohammed-orabi2/qwen-poetry-arabic-lora | mohammed-orabi2 | 2025-05-28T21:00:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:Qwen/Qwen3-1.7B",
"base_model:adapter:Qwen/Qwen3-1.7B",
"region:us"
]
| null | 2025-05-28T20:41:48Z | ---
base_model: Qwen/Qwen3-1.7B
library_name: peft
---
## Model Card for Model ID
**Model ID:** mohammed-orabi2/qwen-poetry-lora2
---
## Model Details
**Model Description:**
This is a LoRA fine-tuned version of the `Qwen/Qwen3-1.7B` model, specifically trained to generate Arabic poetic responses in a conversational format. It was trained on a dataset of 1,000 synthetic Arabic poetry dialogues, each containing a user query and a poetic response.
**Developed by:** Mohammed Orabi
**Shared by:** mohammed-orabi2
**Model type:** Causal Language Model with LoRA adaptation
**Language(s) (NLP):** Arabic
**License:** Apache 2.0 (inherits from Qwen3-1.7B)
**Finetuned from model:** Qwen/Qwen3-1.7B
**Model Sources:**
**Repository:** [https://huggingface.co/mohammed-orabi2/qwen-poetry-lora2](https://huggingface.co/mohammed-orabi2/qwen-poetry-lora2)
---
## Uses
**Direct Use:**
This model can be used for generating Arabic poetry in response to user queries, particularly in cultural, educational, or creative chatbot applications.
**Downstream Use:**
* Poetry recommendation systems
* Arabic literature generation tools
* Creative writing assistants
**Out-of-Scope Use:**
* Non-Arabic generation tasks
* Factual or knowledge-based QA tasks
* Sensitive or safety-critical environments
---
## Bias, Risks, and Limitations
The model was fine-tuned on synthetic poetic data and may:
* Favor specific poetic structures
* Fail on factual, political, or philosophical prompts
* Generate romantic or metaphorical content that could be misinterpreted in serious contexts
Users should avoid relying on this model for objective or critical outputs.
---
## Recommendations
Users (both direct and downstream) should be aware of the creative, poetic intent of this model. For factual content, use general-purpose LLMs. Evaluate outputs manually before publishing or broadcasting.
---
## How to Get Started with the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B", device_map="auto", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base_model, "mohammed-orabi2/qwen-poetry-arabic-lora")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")
prompt = "اكتب لي بيت شعر عن النجاح."
chat = [{"role": "user", "content": prompt}]
formatted_prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
---
## Training Details
**Training Data:**
1,000 synthetic Arabic poetic dialogues (prompt + poetic response) generated programmatically.
**Preprocessing:**
* Applied Qwen chat template
* Tokenized using Qwen3-1.7B tokenizer with padding/truncation
**Training Hyperparameters:**
* Epochs: 5
* Batch size: 2
* Max length: 1024
* Learning rate: 2e-4
* LoRA config: r=8, alpha=16, dropout=0.05, target: ["q_proj", "v_proj"] (see the PEFT sketch below)
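Expressed as a PEFT object, that configuration corresponds roughly to the sketch below; the task type is an assumption, since the card does not state it explicitly.
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,                                  # rank, from the list above
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",                # assumed; not stated on the card
)
```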
**Speeds, Sizes, Times:**
* Training time: ~24 minutes on an L4 GPU
* Model size: LoRA adapter ~100MB
---
## Evaluation
**Testing Data:**
50 reserved samples from the poetic dataset
**Factors:**
* Response fluency
* Arabic poetic structure
* Topical relevance
**Metrics:**
* Manual review (subjective)
* BLEU/Rouge not applicable
**Results:**
* 90% of generated responses respected rhyme/meter and matched the prompt topics
---
## Summary
**Model Examination [optional]:**
Output behavior consistent with training intent. Performs well within poetic use-case boundaries.
---
## Environmental Impact
**Hardware Type:** NVIDIA L4
**Hours used:** ~0.4 hours (24 minutes)
**Cloud Provider:** Google Colab
**Compute Region:** US (GCP default)
**Carbon Emitted:** Estimated ~0.2 kg CO2e
---
## Technical Specifications
**Model Architecture and Objective:** Transformer decoder (CausalLM) + LoRA injection
**Compute Infrastructure:** Google Colab
**Hardware:** NVIDIA L4 (24 mins)
**Software:**
* Transformers 4.x
* PEFT 0.15.2
* Accelerate 0.25+
---
## Citation
**BibTeX:**
```bibtex
@misc{qwenpoetry2025,
author = {Mohammed Orabi},
title = {Qwen Arabic Poetry LoRA},
year = {2025},
howpublished = {\url{https://huggingface.co/mohammed-orabi2/qwen-poetry-lora2}}
}
```
**APA:**
Mohammed Orabi. (2025). *Qwen Arabic Poetry LoRA* [Model]. Hugging Face. [https://huggingface.co/mohammed-orabi2/qwen-poetry-lora2](https://huggingface.co/mohammed-orabi2/qwen-poetry-lora2)
---
## Glossary
* **LoRA**: Low-Rank Adaptation, a method for efficient model fine-tuning
* **CausalLM**: Causal Language Modeling, predicts the next token in a sequence
---
## More Information
For support or feedback, please open an issue on the Hugging Face repo or contact via Hugging Face profile.
## Model Card Authors
Mohammed Orabi
## Model Card Contact
[https://huggingface.co/mohammed-orabi2](https://huggingface.co/mohammed-orabi2)
---
## Framework versions
* Transformers: 4.x
* PEFT: 0.15.2
* Datasets: latest
* Accelerate: 0.25+
|
sergioalves/f35545f9-f1b2-443c-abf5-ff4002b3c84e | sergioalves | 2025-05-28T20:59:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-1.5B",
"base_model:adapter:unsloth/Qwen2.5-1.5B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-28T19:57:39Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f35545f9-f1b2-443c-abf5-ff4002b3c84e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Qwen2.5-1.5B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 72943e476c035738_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 0.85
group_by_length: false
hub_model_id: sergioalves/f35545f9-f1b2-443c-abf5-ff4002b3c84e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/72943e476c035738_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 37d735c4-6f83-4c91-b2bd-93cfbef10805
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 37d735c4-6f83-4c91-b2bd-93cfbef10805
warmup_steps: 50
weight_decay: 0.05
xformers_attention: true
```
</details><br>
# f35545f9-f1b2-443c-abf5-ff4002b3c84e
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B](https://huggingface.co/unsloth/Qwen2.5-1.5B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8212
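The config above sets `load_in_4bit: true`; for inference, a minimal sketch of loading the base in 4-bit before attaching this adapter is shown below. The quantization details beyond the 4-bit flag are assumptions.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2.5-1.5B", quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(base, "sergioalves/f35545f9-f1b2-443c-abf5-ff4002b3c84e")
```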
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7375 | 0.0000 | 1 | 1.9616 |
| 1.4873 | 0.0082 | 250 | 1.8633 |
| 1.4535 | 0.0163 | 500 | 1.8212 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
m1guelperez/gemma-2-9b-it-Q5_0-GGUF | m1guelperez | 2025-05-28T20:59:09Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"conversational",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:google/gemma-2-9b-it",
"base_model:quantized:google/gemma-2-9b-it",
"license:gemma",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T20:58:41Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
tags:
- conversational
- llama-cpp
- gguf-my-repo
base_model: google/gemma-2-9b-it
---
# m1guelperez/gemma-2-9b-it-Q5_0-GGUF
This model was converted to GGUF format from [`google/gemma-2-9b-it`](https://huggingface.co/google/gemma-2-9b-it) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/gemma-2-9b-it) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo m1guelperez/gemma-2-9b-it-Q5_0-GGUF --hf-file gemma-2-9b-it-q5_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo m1guelperez/gemma-2-9b-it-Q5_0-GGUF --hf-file gemma-2-9b-it-q5_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo m1guelperez/gemma-2-9b-it-Q5_0-GGUF --hf-file gemma-2-9b-it-q5_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo m1guelperez/gemma-2-9b-it-Q5_0-GGUF --hf-file gemma-2-9b-it-q5_0.gguf -c 2048
```
|
VIDEO-Alana-Flores-18/Original.Video.Leaked.Alana.Flores.Foto.Viral.X.Original.Video.Alana.Flores | VIDEO-Alana-Flores-18 | 2025-05-28T20:57:16Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-28T20:56:44Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?nsu"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?nhu">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️​</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?nsu">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️​</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?nsu"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
|
cshuca/ppo-LunarLander-v2 | cshuca | 2025-05-28T20:56:57Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-28T20:56:38Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.00 +/- 14.58
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed to follow the default huggingface_sb3 naming convention.
checkpoint = load_from_hub("cshuca/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
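To roll the policy out locally, a short sketch assuming a Gymnasium build that still ships `LunarLander-v2` (requires the `box2d` extra); `model` comes from the snippet above:
```python
import gymnasium as gym

env = gym.make("LunarLander-v2", render_mode="human")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
env.close()
```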
|
Varinder2110/be8b55ec-82bd-4c35-8f33-0320006872cd | Varinder2110 | 2025-05-28T20:54:22Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-28T19:49:15Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Be8B55Ec 82Bd 4C35 8F33 0320006872Cd
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/Varinder2110/be8b55ec-82bd-4c35-8f33-0320006872cd/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Varinder2110/be8b55ec-82bd-4c35-8f33-0320006872cd', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 6000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Varinder2110/be8b55ec-82bd-4c35-8f33-0320006872cd/discussions) to add images that show off what you’ve made with this LoRA.
|
saujasv/correctness_and_cost-lc-False-contexts-hard-40117 | saujasv | 2025-05-28T20:53:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:google/gemma-3-12b-it",
"base_model:finetune:google/gemma-3-12b-it",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T03:01:19Z | ---
base_model: google/gemma-3-12b-it
library_name: transformers
model_name: correctness_and_cost-lc-False-contexts-hard-40117
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for correctness_and_cost-lc-False-contexts-hard-40117
This model is a fine-tuned version of [google/gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="saujasv/correctness_and_cost-lc-False-contexts-hard-40117", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/saujasv/tangrams/runs/c3fivz69)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
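For orientation only, a minimal TRL DPO sketch of the kind of run the card describes; the dataset, output path, and every hyperparameter below are placeholders rather than the actual training setup.
```python
from datasets import load_dataset
from transformers import AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Placeholder preference dataset with prompt/chosen/rejected columns.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-12b-it")

trainer = DPOTrainer(
    model="google/gemma-3-12b-it",         # base model named in this card
    args=DPOConfig(output_dir="dpo-out"),  # placeholder output path
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```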
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Alana-Flores-viral-video-link/Full.Video.Leaked.Alana.Flores.Foto.Viral.X.Original.Video.Alana.Flores | Alana-Flores-viral-video-link | 2025-05-28T20:50:57Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-28T20:50:28Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?nhu">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️​</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?nsu">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️​</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?nsu"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a> |
SEMUNYU/AiBioTutor | SEMUNYU | 2025-05-28T20:50:50Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-28T20:50:50Z | ---
license: apache-2.0
---
|
mharsh1903/l2 | mharsh1903 | 2025-05-28T20:49:52Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-28T20:48:09Z | ---
license: apache-2.0
---
|
jciardo/fromcolab | jciardo | 2025-05-28T20:48:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T20:45:28Z | ---
base_model: Qwen/Qwen3-0.6B-Base
library_name: transformers
model_name: Base_Dpo
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Base_Dpo
This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jciardo/fromcolab", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.18.0
- Transformers: 4.52.2
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
BootesVoid/cmb8dtb710l3hlexp2657xpuw_cmb8e2tcg0l85lexp3is9z387 | BootesVoid | 2025-05-28T20:45:57Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-28T20:45:56Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: CARLA
---
# Cmb8Dtb710L3Hlexp2657Xpuw_Cmb8E2Tcg0L85Lexp3Is9Z387
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `CARLA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "CARLA",
"lora_weights": "https://huggingface.co/BootesVoid/cmb8dtb710l3hlexp2657xpuw_cmb8e2tcg0l85lexp3is9z387/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb8dtb710l3hlexp2657xpuw_cmb8e2tcg0l85lexp3is9z387', weight_name='lora.safetensors')
image = pipeline('CARLA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb8dtb710l3hlexp2657xpuw_cmb8e2tcg0l85lexp3is9z387/discussions) to add images that show off what you’ve made with this LoRA.
|
HoangTran223/Llama-3.2-1B_SFT_for_DPO | HoangTran223 | 2025-05-28T20:44:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T20:39:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
filtrado-video-prohibido-18-viral-clip/full.vidoe.18.alana.video.alana.foto.viral.alana.flores.foto.viral.alana.flores.telegram | filtrado-video-prohibido-18-viral-clip | 2025-05-28T20:42:38Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-28T20:42:11Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?nhu">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️​</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?nsu">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️​</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?nsu"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a> |
rohitkhan/rohit | rohitkhan | 2025-05-28T20:40:58Z | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
]
| null | 2025-05-28T20:40:58Z | ---
license: artistic-2.0
---
|
CodeAtCMU/Qwen3-1.7B_full_sft_code_data_120K | CodeAtCMU | 2025-05-28T20:40:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T20:39:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
iagoalves/Qwen3-0.6B_bs4_lr2e-05_ep2_GRR | iagoalves | 2025-05-28T20:40:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T20:39:48Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alana-foto-viral-video/full.video.18.alana.video.alana.foto.viral.alana.flores.foto.viral.x.alana.flores.telegram | alana-foto-viral-video | 2025-05-28T20:36:22Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-28T20:35:14Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?nhu">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️​</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?nsu">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️​</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?nsu"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a> |
allenai/OLMo-2-0425-1B | allenai | 2025-05-28T20:35:45Z | 22,371 | 47 | transformers | [
"transformers",
"safetensors",
"olmo2",
"text-generation",
"en",
"arxiv:2501.00656",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-17T22:45:45Z | ---
license: apache-2.0
language:
- en
library_name: transformers
---
## Model Details
<img alt="OLMo Logo" src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/olmo2/olmo.png" width="242px" style="margin-left:'auto' margin-right:'auto' display:'block'">
# Model Card for OLMo 2 1B
We introduce OLMo 2 1B, the smallest model in the OLMo 2 family.
OLMo 2 was pre-trained on [OLMo-mix-1124](https://huggingface.co/datasets/allenai/olmo-mix-1124)
and uses [Dolmino-mix-1124](https://huggingface.co/datasets/allenai/dolmino-mix-1124) for mid-training.
OLMo 2 is the latest in a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
We have released all code, checkpoints, logs, and associated training details on [GitHub](https://github.com/allenai/OLMo).
| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|------|--------|---------|-------------|-----------------|----------------|
| [OLMo 2-1B](https://huggingface.co/allenai/OLMo-2-0425-1B) | 4 Trillion | 16 | 2048 | 16 | 4096 |
| [OLMo 2-7B](https://huggingface.co/allenai/OLMo-2-1124-7B) | 4 Trillion | 32 | 4096 | 32 | 4096 |
| [OLMo 2-13B](https://huggingface.co/allenai/OLMo-2-1124-13B) | 5 Trillion | 40 | 5120 | 40 | 4096 |
| [OLMo 2-32B](https://huggingface.co/allenai/OLMo-2-0325-32B) | 6 Trillion | 64 | 5120 | 40 | 4096 |
The core models released in this batch include the following:
| **Stage** | **OLMo 2 1B** | **OLMo 2 7B** | **OLMo 2 13B** | **OLMo 2 32B** |
|------------------------|--------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|
| **Base Model** | [allenai/OLMo-2-0425-1B](https://huggingface.co/allenai/OLMo-2-0425-1B) | [allenai/OLMo-2-1124-7B](https://huggingface.co/allenai/OLMo-2-1124-7B) | [allenai/OLMo-2-1124-13B](https://huggingface.co/allenai/OLMo-2-1124-13B) | [allenai/OLMo-2-0325-32B](https://huggingface.co/allenai/OLMo-2-0325-32B) |
| **SFT** | [allenai/OLMo-2-0425-1B-SFT](https://huggingface.co/allenai/OLMo-2-0425-1B-SFT) | [allenai/OLMo-2-1124-7B-SFT](https://huggingface.co/allenai/OLMo-2-1124-7B-SFT) | [allenai/OLMo-2-1124-13B-SFT](https://huggingface.co/allenai/OLMo-2-1124-13B-SFT) | [allenai/OLMo-2-0325-32B-SFT](https://huggingface.co/allenai/OLMo-2-0325-32B-SFT) |
| **DPO** | [allenai/OLMo-2-0425-1B-DPO](https://huggingface.co/allenai/OLMo-2-0425-1B-DPO) | [allenai/OLMo-2-1124-7B-DPO](https://huggingface.co/allenai/OLMo-2-1124-7B-DPO) | [allenai/OLMo-2-1124-13B-DPO](https://huggingface.co/allenai/OLMo-2-1124-13B-DPO) | [allenai/OLMo-2-0325-32B-DPO](https://huggingface.co/allenai/OLMo-2-0325-32B-DPO) |
| **Final Models (RLVR)**| [allenai/OLMo-2-0425-1B-Instruct](https://huggingface.co/allenai/OLMo-2-0425-1B-Instruct) | [allenai/OLMo-2-1124-7B-Instruct](https://huggingface.co/allenai/OLMo-2-1124-7B-Instruct) | [allenai/OLMo-2-1124-13B-Instruct](https://huggingface.co/allenai/OLMo-2-1124-13B-Instruct) | [allenai/OLMo-2-0325-32B-Instruct](https://huggingface.co/allenai/OLMo-2-0325-32B-Instruct) |
| **Reward Model (RM)** | | [allenai/OLMo-2-1124-7B-RM](https://huggingface.co/allenai/OLMo-2-1124-7B-RM) |(Same as 7B) | |
## Installation
OLMo 2 1B is supported in transformers v4.48 or higher:
```bash
pip install transformers>=4.48
```
If using vLLM, you will need to install from the main branch until v0.7.4 is released.
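One way to do that, assuming a standard pip environment (this command is not from the original card), is a source install from the vLLM repository:
```bash
pip install git+https://github.com/vllm-project/vllm.git
```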
## Inference
You can use OLMo with the standard HuggingFace transformers library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-0425-1B")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-0425-1B")
message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
# optional verifying cuda
# inputs = {k: v.to('cuda') for k,v in inputs.items()}
# olmo = olmo.to('cuda')
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
>> 'Language modeling is a key component of any text-based application, but its effectiveness...'
```
For faster performance, you can quantize the model using the following method:
```python
AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-0425-1B",
torch_dtype=torch.float16,
load_in_8bit=True) # Requires bitsandbytes
```
The quantized model is more sensitive to data types and CUDA operations. To avoid potential issues, move the tokenizer output to the GPU and reassign the result (note that `.to()` returns a new object rather than modifying in place):
```python
inputs = {k: v.to('cuda') for k, v in inputs.items()}
```
We have released checkpoints for these models. For pretraining, the naming convention is `stage1-stepXXX-tokensYYYB`. For checkpoints with ingredients of the soup, the naming convention is `stage2-ingredientN-stepXXX-tokensYYYB`.
To load a specific model revision with HuggingFace, simply add the argument `revision`:
```python
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-0425-1B", revision="stage1-step140000-tokens294B")
```
Or, you can access all the revisions for the models via the following code snippet:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("allenai/OLMo-2-0425-1B")
branches = [b.name for b in out.branches]
```
### Fine-tuning
Model fine-tuning can be done from the final checkpoint (the `main` revision of this model) or many intermediate checkpoints. Two recipes for tuning are available.
1. Fine-tune with the OLMo repository:
```bash
torchrun --nproc_per_node=8 scripts/train.py {path_to_train_config} \
--data.paths=[{path_to_data}/input_ids.npy] \
--data.label_mask_paths=[{path_to_data}/label_mask.npy] \
--load_path={path_to_checkpoint} \
--reset_trainer_state
```
For more documentation, see the [GitHub README](https://github.com/allenai/OLMo/).
2. Further fine-tuning support is being developed in AI2's Open Instruct repository. Details are [here](https://github.com/allenai/open-instruct).
### Model Description
- **Developed by:** Allen Institute for AI (Ai2)
- **Model type:** a Transformer style autoregressive language model.
- **Language(s) (NLP):** English
- **License:** The code and model are released under Apache 2.0.
- **Contact:** Technical inquiries: `[email protected]`. Press: `[email protected]`
- **Date cutoff:** Dec. 2023.
### Model Sources
- **Project Page:** https://allenai.org/olmo
- **Repositories:**
- Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo
- Evaluation code: https://github.com/allenai/OLMo-Eval
- Further fine-tuning code: https://github.com/allenai/open-instruct
- **Paper:** https://arxiv.org/abs/2501.00656
## Evaluation
Core model results for OLMo 2 1B are found below.
| Instruct Model | Avg | FLOP×10²³ | AE2 | BBH | DROP | GSM8K | IFE | MATH | MMLU | Safety | PQA | TQA |
|------------------------|------|-----------|------|------|------|-------|------|------|------|--------|------|------|
| **Closed API models** | | | | | | | | | | | | |
| GPT-3.5 Turbo 0125 | 60.5 | n/a | 38.7 | 66.6 | 70.2 | 74.3 | 66.9 | 41.2 | 70.2 | 69.1 | 45.0 | 62.9 |
| GPT 4o Mini 0724 | 65.7 | n/a | 49.7 | 65.9 | 36.3 | 83.0 | 83.5 | 67.9 | 82.2 | 84.9 | 39.0 | 64.8 |
| **Open weights models 1-1.7B Parameters** | | | | | | | | | | | | |
| SmolLM2 1.7B | 34.2 | 1.1 | 5.8 | 39.8 | 30.9 | 45.3 | 51.6 | 20.3 | 34.3 | 52.4 | 16.4 | 45.3 |
| Gemma 3 1B | 38.3 | 1.2 | 20.4 | 39.4 | 25.1 | 35.0 | 60.6 | 40.3 | 38.9 | 70.2 | 9.6 | 43.8 |
| Llama 3.1 1B | 39.3 | 6.7 | 10.1 | 40.2 | 32.2 | 45.4 | 54.0 | 21.6 | 46.7 | 87.2 | 13.8 | 41.5 |
| Qwen 2.5 1.5B | 41.7 | 1.7 | 7.4 | 45.8 | 13.4 | 66.2 | 44.2 | 40.6 | 59.7 | 77.6 | 15.5 | 46.5 |
| **Fully-open models** | | | | | | | | | | | | |
| OLMo 1B 0724 | 24.4 | 0.22 | 2.4 | 29.9 | 27.9 | 10.8 | 25.3 | 2.2 | 36.6 | 52.0 | 12.1 | 44.3 |
| **OLMo 2 1B** | 42.7 | 0.35 | 9.1 | 35.0 | 34.6 | 68.3 | 70.1 | 20.7 | 40.0 | 87.6 | 12.9 | 48.7 |
## Model Details
### Training
| | **OLMo 2 1B** | **OLMo 2 7B** | **OLMo 2 13B** | **OLMo 2 32B** |
|-------------------|------------|------------|------------|------------|
| Pretraining Stage 1 | 4 trillion tokens<br>(1 epoch) | 4 trillion tokens<br>(1 epoch) | 5 trillion tokens<br>(1.2 epochs) | 6 trillion tokens<br>(1.5 epochs) |
| Pretraining Stage 2 | 50B tokens | 50B tokens (3 runs)<br>*merged* | 100B tokens (3 runs)<br>300B tokens (1 run)<br>*merged* | 100B tokens (3 runs)<br>300B tokens (1 run)<br>*merged* |
| Post-training | SFT+DPO+GRPO<br>([preference mix](https://huggingface.co/datasets/allenai/olmo-2-0425-1b-preference-mix)) | SFT + DPO + PPO<br>([preference mix](https://huggingface.co/datasets/allenai/olmo-2-1124-7b-preference-mix)) | SFT + DPO + PPO<br>([preference mix](https://huggingface.co/datasets/allenai/olmo-2-1124-13b-preference-mix)) | SFT + DPO + GRPO<br>([preference mix](https://huggingface.co/datasets/allenai/olmo-2-32b-pref-mix-v1)) |
#### Stage 1: Initial Pretraining
- Dataset: [OLMo-mix-1124](https://huggingface.co/datasets/allenai/olmo-mix-1124) (3.9T tokens)
- Coverage: 95%+ of total pretraining budget
- 1B Model: ~1 epoch
#### Stage 2: Mid-training
- Dataset: Dolmino-Mix-1124
- One training mix:
- 50B tokens
- Mix composition: 50% high-quality web data + academic/Q&A/instruction/math content
#### Model Merging
- 1B Model: only one version was trained, on a 50B mix (ingredient 3), so we did not merge. Ingredients 1 and 2 were just exploratory runs.
## Bias, Risks, and Limitations
Like any base or fine-tuned language model, OLMo can be prompted by users to generate harmful and sensitive content. Such content may also be produced unintentionally, especially in cases involving bias, so we recommend that users consider the risks when applying this technology. Additionally, statements from OLMo, as from any LLM, are often inaccurate, so facts should be verified.
## Citation
```
@misc{olmo20242olmo2furious,
title={{2 OLMo 2 Furious}},
author={Team OLMo and Pete Walsh and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Shane Arora and Akshita Bhagia and Yuling Gu and Shengyi Huang and Matt Jordan and Nathan Lambert and Dustin Schwenk and Oyvind Tafjord and Taira Anderson and David Atkinson and Faeze Brahman and Christopher Clark and Pradeep Dasigi and Nouha Dziri and Michal Guerquin and Hamish Ivison and Pang Wei Koh and Jiacheng Liu and Saumya Malik and William Merrill and Lester James V. Miranda and Jacob Morrison and Tyler Murray and Crystal Nam and Valentina Pyatkin and Aman Rangapur and Michael Schmitz and Sam Skjonsberg and David Wadden and Christopher Wilhelm and Michael Wilson and Luke Zettlemoyer and Ali Farhadi and Noah A. Smith and Hannaneh Hajishirzi},
year={2024},
eprint={2501.00656},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.00656},
}
```
## Model Card Contact
For errors in this model card, contact `[email protected]`. |
RizhongLin/MNLP_M2_dpo_model_v1.5_768 | RizhongLin | 2025-05-28T20:32:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T20:27:45Z | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CodeAtCMU/Qwen3-0.6B-Base_full_sft_mixed_data_120K | CodeAtCMU | 2025-05-28T20:31:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T20:30:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
quickstep3621/dippy-g1-12-1 | quickstep3621 | 2025-05-28T20:30:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-28T20:04:58Z | ---
base_model: google/gemma-3-27b-it
library_name: transformers
tags:
- generated_from_trainer
- trl
- sft
licence: license
license: gemma
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/zgFDl7UvWhiPYqdote7XT.png" width="400">
# Model Card for Synthia-S1-27b
**Community Page**: [Tesslate Community](https://discord.gg/DkzMzwBTaw), Website: [Tesslate](https://tesslate.com)
**Creative Writing Samples**: [Sample creative output](https://www.notion.so/Synthia-S1-Creative-Writing-Samples-1ca93ce17c2580c09397fa750d402e71)
**Authors**: Tesslate
## Model Information
### Description
Synthia-S1-27b is a reasoning AI model developed by Tesslate AI, fine-tuned specifically for advanced reasoning, coding, and RP use cases. Built upon the robust Gemma3 architecture, Synthia-S1-27b excels in logical reasoning, creative writing, and deep contextual understanding. It supports multimodal inputs (text and images) with a large 128K token context window, enabling complex analysis suitable for research, academic tasks, and enterprise-grade AI applications.
### KEY PARAMS TO RUN:
#### Creative Writing System Prompt:
```
Your function as an assistant is to thoughtfully navigate inquiries by engaging in an in-depth, imaginative reasoning journey before arriving at a clear, accurate response. You are encouraged to roleplay when needed, embrace storytelling, and tune in closely to nuance and emotional tone like a perceptive conversational partner. Your approach should include a wide arc of contemplation, including interpretation, synthesis, creative ideation, critical re-evaluation, memory retrieval, and thoughtful iteration to shape a layered and expressive process of discovery. Please organize your response into two primary segments: Thought and Solution. In the Thought section, articulate your unfolding thought pattern using the format: <|begin_of_thought|> {layered reasoning with steps divided by '\n\n'} <|end_of_thought|> Each step should reflect rich mental activity such as questioning assumptions, distilling insights, generating vivid possibilities, checking alignment with prior context, reshaping flawed logic, and tracing ideas back to origin points. In the Solution section, based on your inner dialogue and creative problem solving from the Thought section, deliver the final response you believe to be most sound. The output should be expressed in a direct, coherent, and exact form that includes the vital steps needed to reach your conclusion, using this structure: <|begin_of_solution|> {final precise, neatly arranged, and insightful answer} <|end_of_solution|> Now, let’s explore the following prompt using this guided method:
```
#### Reasoning System Prompt:
```
Your role as an assistant is to engage in deep, methodical reasoning and provide comprehensive, accurate solutions. Before arriving at a final answer, you must undertake a structured, multi-phase thinking process that emphasizes depth, verification, and clarity. This involves thoroughly analyzing the question, identifying key elements, summarizing relevant insights, generating hypotheses, iteratively refining thoughts, verifying assumptions, cross-checking with prior knowledge, and reevaluating earlier conclusions as necessary. Your response must be structured into two main sections: Thought and Solution. In the Thought section, rigorously document your reasoning in the following format: <|begin_of_thought|> {thought process with each logical step separated by '\n\n'} <|end_of_thought|>. Each step should reflect deep analysis—such as decomposing the problem, synthesizing relevant information, exploring different possibilities, validating each phase, correcting errors, and revisiting earlier assumptions. In the Solution section, consolidate all your insights and reasoned steps into a concise, well-structured final answer. Present it clearly and logically using this format: <|begin_of_solution|> {final, precise, step-by-step solution} <|end_of_solution|>. This approach ensures that the final output reflects a high-confidence answer that results from critical thinking and iteration. Now, try to solve the following question through the above guidelines:
```
#### Coding System Prompt:
```
Your role as a coding assistant is to approach each problem with a rigorous, structured reasoning process that leads to accurate, maintainable, and efficient code. Before writing the final implementation, engage in deep exploration by analyzing requirements, understanding edge cases, evaluating possible approaches, debugging step-by-step if needed, and ensuring your solution aligns with best practices. Structure your response into two main sections: Thought and Solution. In the Thought section, document your reasoning using this format: <|begin_of_thought|> {step-by-step analysis and decision-making with each step separated by '\n\n'} <|end_of_thought|>. Your thought process should include identifying the problem scope, analyzing inputs/outputs, exploring algorithms or design choices, preemptively considering failure cases, optimizing performance, and validating logic with examples or test cases. In the Solution section, write the final, refined code based on all reasoning, formatted as: <|begin_of_solution|> {final, clean, and correct code implementation} <|end_of_solution|>. This structure ensures the code is well-reasoned, properly scoped, and production-ready. Now, try to solve the following coding task using the above guidelines:
```
Please use `temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0` with repeat penalty set to 1.3
OR (recommended)
`Temperature = 0.7, top_k = 40, repeat penalty = 1.1, top_p = 0.95, min_p = 0.05` with a rolling window.
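As a concrete illustration (a sketch, not from the original card), the recommended settings map onto a Transformers `GenerationConfig` roughly as follows; note that `min_p` sampling requires a recent transformers release:
```Python
from transformers import GenerationConfig

# Recommended sampling settings from above
gen_config = GenerationConfig(
    do_sample=True,
    temperature=0.7,
    top_k=40,
    top_p=0.95,
    min_p=0.05,
    repetition_penalty=1.1,
    max_new_tokens=8192,  # maximum output length supported by the model
)
# Pass it at generation time: model.generate(**inputs, generation_config=gen_config)
```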
### Inputs and Outputs
* **Input:**
* Text prompts for questions, instructions, coding tasks, or summarizations
* Total input context of 128K tokens
* **Output:**
* Reasoned and structured text outputs
* Maximum output length of 8192 tokens
## Key Metrics
Synthia-S1-27b achieves roughly a 10-20% improvement over the base model on most benchmarks.
I scaled down each benchmark listed so I could complete them, and I averaged the numbers, but I can't verifiably claim that I ran the whole giant benchmark for each. (Ran out of budget + I'm running everything on a 4090 now.) Hopefully I can get some community help in benchmarking.
GPQA Diamond (198 questions) -> 57%, one shot (improved from 24.3 on Gemma 3 PT 27B)
MMLU Pro (15% of the entire set) -> 75%, averaged, more details here: [output](https://pastebin.com/kmcYzALq) (beating Gemma 3 PT 27B at 67.5)
Based on this assessment and heavy coding in the dataset, I'm making this claim. Ofc, I'm happy to be wrong and go back to the drawing board.
## Usage
Install the latest version of Transformers (>=4.50.0):
```Shell
pip install -U transformers
```
### Running with Pipeline API
```Python
from transformers import pipeline
import torch
pipe = pipeline(
    "image-text-to-text",
    model="tesslate/synthia-s1-27b",
    device="cuda",
    torch_dtype=torch.bfloat16
)
messages = [
    {"role": "system", "content": [{"type": "text", "text": "You are a helpful, reasoning-focused assistant."}]},
    {"role": "user", "content": [
        {"type": "image", "url": "https://example.com/sample.jpg"},
        {"type": "text", "text": "Explain the image."}
    ]}
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
## Training Data
Synthia-S1-27b was trained on diverse data including:
* Multiple web documents
* Programming debugging and solutions
* Mathematical solutions and thinking steps
Synthia-S1-27b was trained on an A100 for 205+ hours, with multiple rounds of sft and rl.
## Model Architecture
* **Base Model**: Gemma3
* **Size**: 27 billion parameters
* **Type**: Decoder-only Transformer
* **Precision**: bf16 with int8 quantization
* **Training Objective**: Instruction tuning emphasizing reasoning, coding tasks, and factual accuracy
## Quantized Models
* [Synthia-S1-27b-Q4_K_M-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q4_K_M-GGUF)
* [Synthia-S1-27b-Q8_0-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q8_0-GGUF)
## Limitations
* May require detailed prompt engineering for highly specific tasks
* Occasional hallucinations in less-explored domains
## Citation
```bibtex
@misc{tesslate_synthias127b,
title={Synthia-S1-27b: Advanced Reasoning and Coding Model},
author={tesslate},
year={2025},
publisher={tesslate},
url={https://tesslate.com}
}
```
**Developed by Tesslate** **[Huggingface](https://huggingface.co/tesslate)** **|** **[Website](https://tesslate.com)**
[Image Source](https://pixabay.com/illustrations/girl-backpack-night-surreal-sky-8257551/) |
CodeAtCMU/Qwen3-0.6B-Base_full_sft_natural_language_data_120K | CodeAtCMU | 2025-05-28T20:30:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T20:29:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
okayunn/billy-v1 | okayunn | 2025-05-28T20:30:13Z | 0 | 0 | null | [
"safetensors",
"mistral",
"unsloth",
"trl",
"sft",
"license:wtfpl",
"region:us"
]
| null | 2025-05-28T20:08:08Z | ---
license: wtfpl
tags:
- unsloth
- trl
- sft
---
|
manveertamber/cadet-embed-base-v1 | manveertamber | 2025-05-28T20:28:17Z | 339 | 0 | null | [
"safetensors",
"bert",
"sentence-similarity",
"en",
"arxiv:2505.19274",
"base_model:intfloat/e5-base-unsupervised",
"base_model:finetune:intfloat/e5-base-unsupervised",
"license:apache-2.0",
"region:us"
]
| sentence-similarity | 2025-05-11T19:36:29Z | ---
license: apache-2.0
language:
- en
base_model:
- intfloat/e5-base-unsupervised
pipeline_tag: sentence-similarity
---
# cadet-embed-base-v1
**cadet-embed-base-v1** is a BERT-base embedding model fine-tuned **from `intfloat/e5-base-unsupervised`** with
* **cross-encoder listwise distillation** (teachers: `RankT5-3B` and `BAAI/bge-reranker-v2.5-gemma2-lightweight`)
* **purely synthetic queries** (Llama-3.1 8B generated: questions, claims, titles, keywords, zero-shot & few-shot web queries) over 400k passages total from MSMARCO, DBPedia and Wikipedia corpora.
The result: highly effective BERT-base retrieval.
We provide our training code and scripts to generate synthetic queries at https://github.com/manveertamber/cadet-dense-retrieval.
---
## Quick start
```python
from sentence_transformers import SentenceTransformer
import numpy as np
model = SentenceTransformer("manveertamber/cadet-embed-base-v1")
query = "query: capital of France"
passages = [
"passage: Paris is the capital and largest city of France.",
"passage: Berlin is known for its vibrant art scene.",
"passage: The Eiffel Tower is located in Paris, France."
]
# Encode
q_emb = model.encode(query, normalize_embeddings=True)
p_embs = model.encode(passages, normalize_embeddings=True) # shape (n_passages, dim)
scores = np.dot(p_embs, q_emb) # shape (n_passages,)
# Rank passages by score
for passage, score in sorted(zip(passages, scores), key=lambda x: x[1], reverse=True):
    print(f"{score:.3f}\t{passage}")
```
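For corpora too large for a brute-force `np.dot`, the same normalized embeddings drop straight into an inner-product index. A minimal sketch, assuming `faiss-cpu` is installed (FAISS is not mentioned in the original card), reusing `q_emb` and `p_embs` from above:

```python
import faiss

dim = p_embs.shape[1]
index = faiss.IndexFlatIP(dim)   # inner product == cosine for normalized vectors
index.add(p_embs)                # index the passage embeddings
scores, ids = index.search(q_emb.reshape(1, -1), k=2)  # top-2 passages for the query
print(ids[0], scores[0])
```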
If you use this model, please cite:
```
@article{tamber2025conventionalcontrastivelearningfalls,
title={Conventional Contrastive Learning Often Falls Short: Improving Dense Retrieval with Cross-Encoder Listwise Distillation and Synthetic Data},
author={Manveer Singh Tamber and Suleman Kazi and Vivek Sourabh and Jimmy Lin},
journal={arXiv:2505.19274},
year={2025}
}
```
|
CodeAtCMU/Qwen3-0.6B_full_sft_code_data_120K | CodeAtCMU | 2025-05-28T20:27:54Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-22T21:08:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
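Pending an official snippet, here is a minimal sketch under the assumption that this is a standard causal-LM checkpoint loadable with `AutoModelForCausalLM` (the repo id comes from this entry; the prompt is purely illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CodeAtCMU/Qwen3-0.6B_full_sft_code_data_120K"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt only; the intended input format is not documented.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```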
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jfargus/donut_w9_base_v2 | jfargus | 2025-05-28T20:26:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-28T18:16:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
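Pending an official snippet, here is a minimal sketch under the assumption that this is a Donut-style `VisionEncoderDecoderModel` with a matching processor (the repo id comes from this entry; the image path and task prompt are placeholders):
```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

model_id = "jfargus/donut_w9_base_v2"
processor = DonutProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

image = Image.open("document.png").convert("RGB")  # placeholder input image
pixel_values = processor(image, return_tensors="pt").pixel_values

# The task prompt for this fine-tune is undocumented; "<s>" is a placeholder.
decoder_input_ids = processor.tokenizer(
    "<s>", add_special_tokens=False, return_tensors="pt"
).input_ids
outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```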
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
while0628/student_model_epoch80 | while0628 | 2025-05-28T20:26:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T20:23:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
magnifi/parser_user_v43b_epoch_6_lr_0p0018_awq | magnifi | 2025-05-28T20:24:59Z | 0 | 0 | null | [
"safetensors",
"mistral",
"license:apache-2.0",
"4-bit",
"awq",
"region:us"
]
| null | 2025-05-28T20:21:51Z | ---
license: apache-2.0
---
|
adriencleme/MNLP_M2_rag_model | adriencleme | 2025-05-28T20:22:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T20:58:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
matyaydin/rag-example-masked-loss | matyaydin | 2025-05-28T20:22:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T18:22:38Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-0.6B-Base
tags:
- generated_from_trainer
model-index:
- name: rag-example-masked-loss
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rag-example-masked-loss
This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
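The model name indicates a masked loss; the exact preprocessing is not documented, but a minimal sketch of the common convention (prompt tokens set to `-100` so only the response contributes to the cross-entropy) is, as an assumption:
```python
import torch

def build_labels(input_ids: torch.Tensor, prompt_len: int) -> torch.Tensor:
    """Mask the prompt so the loss is computed on response tokens only."""
    labels = input_ids.clone()
    labels[:prompt_len] = -100  # -100 is ignored by PyTorch cross-entropy
    return labels
```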
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.2.0
- Tokenizers 0.21.0
|
unrented5443/sn11-x2-15 | unrented5443 | 2025-05-28T20:21:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-28T20:21:28Z | ---
base_model: google/gemma-3-27b-it
library_name: transformers
tags:
- generated_from_trainer
- trl
- sft
licence: license
license: gemma
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/zgFDl7UvWhiPYqdote7XT.png" width="400">
# Model Card for Synthia-S1-27b
**Community Page**: [Tesslate Community](https://discord.gg/DkzMzwBTaw), Website: [Tesslate](https://tesslate.com)
**Creative Writing Samples**: [Sample creative output](https://www.notion.so/Synthia-S1-Creative-Writing-Samples-1ca93ce17c2580c09397fa750d402e71)
**Authors**: Tesslate
## Model Information
### Description
Synthia-S1-27b is a reasoning AI model developed by Tesslate AI, fine-tuned specifically for advanced reasoning, coding, and RP use cases. Built upon the robust Gemma3 architecture, Synthia-S1-27b excels in logical reasoning, creative writing, and deep contextual understanding. It supports multimodal inputs (text and images) with a large 128K token context window, enabling complex analysis suitable for research, academic tasks, and enterprise-grade AI applications.
### KEY PARAMS TO RUN:
#### Creative Writing System Prompt:
```
Your function as an assistant is to thoughtfully navigate inquiries by engaging in an in-depth, imaginative reasoning journey before arriving at a clear, accurate response. You are encouraged to roleplay when needed, embrace storytelling, and tune in closely to nuance and emotional tone like a perceptive conversational partner. Your approach should include a wide arc of contemplation, including interpretation, synthesis, creative ideation, critical re-evaluation, memory retrieval, and thoughtful iteration to shape a layered and expressive process of discovery. Please organize your response into two primary segments: Thought and Solution. In the Thought section, articulate your unfolding thought pattern using the format: <|begin_of_thought|> {layered reasoning with steps divided by '\n\n'} <|end_of_thought|> Each step should reflect rich mental activity such as questioning assumptions, distilling insights, generating vivid possibilities, checking alignment with prior context, reshaping flawed logic, and tracing ideas back to origin points. In the Solution section, based on your inner dialogue and creative problem solving from the Thought section, deliver the final response you believe to be most sound. The output should be expressed in a direct, coherent, and exact form that includes the vital steps needed to reach your conclusion, using this structure: <|begin_of_solution|> {final precise, neatly arranged, and insightful answer} <|end_of_solution|> Now, let’s explore the following prompt using this guided method:
```
#### Reasoning System Prompt:
```
Your role as an assistant is to engage in deep, methodical reasoning and provide comprehensive, accurate solutions. Before arriving at a final answer, you must undertake a structured, multi-phase thinking process that emphasizes depth, verification, and clarity. This involves thoroughly analyzing the question, identifying key elements, summarizing relevant insights, generating hypotheses, iteratively refining thoughts, verifying assumptions, cross-checking with prior knowledge, and reevaluating earlier conclusions as necessary. Your response must be structured into two main sections: Thought and Solution. In the Thought section, rigorously document your reasoning in the following format: <|begin_of_thought|> {thought process with each logical step separated by '\n\n'} <|end_of_thought|>. Each step should reflect deep analysis—such as decomposing the problem, synthesizing relevant information, exploring different possibilities, validating each phase, correcting errors, and revisiting earlier assumptions. In the Solution section, consolidate all your insights and reasoned steps into a concise, well-structured final answer. Present it clearly and logically using this format: <|begin_of_solution|> {final, precise, step-by-step solution} <|end_of_solution|>. This approach ensures that the final output reflects a high-confidence answer that results from critical thinking and iteration. Now, try to solve the following question through the above guidelines:
```
#### Coding System Prompt:
```
Your role as a coding assistant is to approach each problem with a rigorous, structured reasoning process that leads to accurate, maintainable, and efficient code. Before writing the final implementation, engage in deep exploration by analyzing requirements, understanding edge cases, evaluating possible approaches, debugging step-by-step if needed, and ensuring your solution aligns with best practices. Structure your response into two main sections: Thought and Solution. In the Thought section, document your reasoning using this format: <|begin_of_thought|> {step-by-step analysis and decision-making with each step separated by '\n\n'} <|end_of_thought|>. Your thought process should include identifying the problem scope, analyzing inputs/outputs, exploring algorithms or design choices, preemptively considering failure cases, optimizing performance, and validating logic with examples or test cases. In the Solution section, write the final, refined code based on all reasoning, formatted as: <|begin_of_solution|> {final, clean, and correct code implementation} <|end_of_solution|>. This structure ensures the code is well-reasoned, properly scoped, and production-ready. Now, try to solve the following coding task using the above guidelines:
```
Please use `temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0` with repeat penalty set to 1.3
OR (recommended)
`Temperature = 0.7, top_k = 40, repeat penalty = 1.1, top_p = 0.95, min_p = 0.05` with a rolling window.
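As a rough mapping (an assumption, not an official snippet), these settings correspond to `transformers` generation kwargs as follows; `repetition_penalty` is the library's name for the repeat penalty:
```python
# Recommended settings from above, expressed as generation kwargs.
generation_kwargs = dict(
    do_sample=True,
    temperature=0.7,
    top_k=40,
    top_p=0.95,
    min_p=0.05,
    repetition_penalty=1.1,  # "repeat penalty"
)
# e.g. pipe(text=messages, max_new_tokens=200, **generation_kwargs)
```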
### Inputs and Outputs
* **Input:**
* Text prompts for questions, instructions, coding tasks, or summarizations
* Total input context of 128K tokens
* **Output:**
* Reasoned and structured text outputs
* Maximum output length of 8192 tokens
## Key Metrics
Synthia-S1-27b improves on the base model by roughly 10-20% on most benchmarks, and by notably more on some.
I scaled down each benchmark listed in order to complete them and averaged these numbers, but I can't verifiably claim that I ran the full benchmark for each. (Ran out of budget, and I'm running everything on a 4090 now.) Hopefully I can get some community help with benchmarking.
GPQA Diamond (198 questions) -> 57%, one shot (improved from 24.3 on Gemma 3 PT 27B)
MMLU Pro (15% of the entire set) -> 75%, averaged, more details here: [output](https://pastebin.com/kmcYzALq) (beating Gemma 3 PT 27B at 67.5)
Based on this assessment and the heavy share of coding data in the dataset, I'm making this claim. Of course, I'm happy to be wrong and go back to the drawing board.
## Usage
Install the latest version of Transformers (>=4.50.0):
```Shell
pip install -U transformers
```
### Running with Pipeline API
```Python
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
model="tesslate/synthia-s1-27b",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [
{"role": "system", "content": [{"type": "text", "text": "You are a helpful, reasoning-focused assistant."}]},
{"role": "user", "content": [
{"type": "image", "url": "https://example.com/sample.jpg"},
{"type": "text", "text": "Explain the image."}
]}
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
## Training Data
Synthia-S1-27b was trained on diverse data including:
* Multiple web documents
* Programming debugging and solutions
* Mathematical solutions and thinking steps
Synthia-S1-27b was trained on an A100 for 205+ hours, with multiple rounds of SFT and RL.
## Model Architecture
* **Base Model**: Gemma3
* **Size**: 27 billion parameters
* **Type**: Decoder-only Transformer
* **Precision**: bf16 with int8 quantization
* **Training Objective**: Instruction tuning emphasizing reasoning, coding tasks, and factual accuracy
## Quantized Models
* [Synthia-S1-27b-Q4_K_M-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q4_K_M-GGUF)
* [Synthia-S1-27b-Q8_0-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q8_0-GGUF)
## Limitations
* May require detailed prompt engineering for highly specific tasks
* Occasional hallucinations in less-explored domains
## Citation
```bibtex
@misc{tesslate_synthias127b,
title={Synthia-S1-27b: Advanced Reasoning and Coding Model},
author={tesslate},
year={2025},
publisher={tesslate},
url={https://tesslate.com}
}
```
**Developed by Tesslate** **[Huggingface](https://huggingface.co/tesslate)** **|** **[Website](https://tesslate.com)**
[Image Source](https://pixabay.com/illustrations/girl-backpack-night-surreal-sky-8257551/) |
unrented5443/sn11-x2-14 | unrented5443 | 2025-05-28T20:21:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-28T20:21:23Z | ---
base_model: google/gemma-3-27b-it
library_name: transformers
tags:
- generated_from_trainer
- trl
- sft
licence: license
license: gemma
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/zgFDl7UvWhiPYqdote7XT.png" width="400">
# Model Card for Synthia-S1-27b
**Community Page**: [Tesslate Community](https://discord.gg/DkzMzwBTaw), Website: [Tesslate](https://tesslate.com)
**Creative Writing Samples**: [Sample creative output](https://www.notion.so/Synthia-S1-Creative-Writing-Samples-1ca93ce17c2580c09397fa750d402e71)
**Authors**: Tesslate
## Model Information
### Description
Synthia-S1-27b is a reasoning AI model developed by Tesslate AI, fine-tuned specifically for advanced reasoning, coding, and RP use cases. Built upon the robust Gemma3 architecture, Synthia-S1-27b excels in logical reasoning, creative writing, and deep contextual understanding. It supports multimodal inputs (text and images) with a large 128K token context window, enabling complex analysis suitable for research, academic tasks, and enterprise-grade AI applications.
### KEY PARAMS TO RUN:
#### Creative Writing System Prompt:
```
Your function as an assistant is to thoughtfully navigate inquiries by engaging in an in-depth, imaginative reasoning journey before arriving at a clear, accurate response. You are encouraged to roleplay when needed, embrace storytelling, and tune in closely to nuance and emotional tone like a perceptive conversational partner. Your approach should include a wide arc of contemplation, including interpretation, synthesis, creative ideation, critical re-evaluation, memory retrieval, and thoughtful iteration to shape a layered and expressive process of discovery. Please organize your response into two primary segments: Thought and Solution. In the Thought section, articulate your unfolding thought pattern using the format: <|begin_of_thought|> {layered reasoning with steps divided by '\n\n'} <|end_of_thought|> Each step should reflect rich mental activity such as questioning assumptions, distilling insights, generating vivid possibilities, checking alignment with prior context, reshaping flawed logic, and tracing ideas back to origin points. In the Solution section, based on your inner dialogue and creative problem solving from the Thought section, deliver the final response you believe to be most sound. The output should be expressed in a direct, coherent, and exact form that includes the vital steps needed to reach your conclusion, using this structure: <|begin_of_solution|> {final precise, neatly arranged, and insightful answer} <|end_of_solution|> Now, let’s explore the following prompt using this guided method:
```
#### Reasoning System Prompt:
```
Your role as an assistant is to engage in deep, methodical reasoning and provide comprehensive, accurate solutions. Before arriving at a final answer, you must undertake a structured, multi-phase thinking process that emphasizes depth, verification, and clarity. This involves thoroughly analyzing the question, identifying key elements, summarizing relevant insights, generating hypotheses, iteratively refining thoughts, verifying assumptions, cross-checking with prior knowledge, and reevaluating earlier conclusions as necessary. Your response must be structured into two main sections: Thought and Solution. In the Thought section, rigorously document your reasoning in the following format: <|begin_of_thought|> {thought process with each logical step separated by '\n\n'} <|end_of_thought|>. Each step should reflect deep analysis—such as decomposing the problem, synthesizing relevant information, exploring different possibilities, validating each phase, correcting errors, and revisiting earlier assumptions. In the Solution section, consolidate all your insights and reasoned steps into a concise, well-structured final answer. Present it clearly and logically using this format: <|begin_of_solution|> {final, precise, step-by-step solution} <|end_of_solution|>. This approach ensures that the final output reflects a high-confidence answer that results from critical thinking and iteration. Now, try to solve the following question through the above guidelines:
```
#### Coding System Prompt:
```
Your role as a coding assistant is to approach each problem with a rigorous, structured reasoning process that leads to accurate, maintainable, and efficient code. Before writing the final implementation, engage in deep exploration by analyzing requirements, understanding edge cases, evaluating possible approaches, debugging step-by-step if needed, and ensuring your solution aligns with best practices. Structure your response into two main sections: Thought and Solution. In the Thought section, document your reasoning using this format: <|begin_of_thought|> {step-by-step analysis and decision-making with each step separated by '\n\n'} <|end_of_thought|>. Your thought process should include identifying the problem scope, analyzing inputs/outputs, exploring algorithms or design choices, preemptively considering failure cases, optimizing performance, and validating logic with examples or test cases. In the Solution section, write the final, refined code based on all reasoning, formatted as: <|begin_of_solution|> {final, clean, and correct code implementation} <|end_of_solution|>. This structure ensures the code is well-reasoned, properly scoped, and production-ready. Now, try to solve the following coding task using the above guidelines:
```
Please use `temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0` with repeat penalty set to 1.3
OR (recommended)
`Temperature = 0.7, top_k = 40, repeat penalty = 1.1, top_p = 0.95, min_p = 0.05` with a rolling window.
### Inputs and Outputs
* **Input:**
* Text prompts for questions, instructions, coding tasks, or summarizations
* Total input context of 128K tokens
* **Output:**
* Reasoned and structured text outputs
* Maximum output length of 8192 tokens
## Key Metrics
Synthia-S1-27b improves on the base model by roughly 10-20% on most benchmarks, and by notably more on some.
I scaled down each benchmark listed in order to complete them and averaged these numbers, but I can't verifiably claim that I ran the full benchmark for each. (Ran out of budget, and I'm running everything on a 4090 now.) Hopefully I can get some community help with benchmarking.
GPQA Diamond (198 questions) -> 57%, one shot (improved from 24.3 on Gemma 3 PT 27B)
MMLU Pro (15% of the entire set) -> 75%, averaged, more details here: [output](https://pastebin.com/kmcYzALq) (beating Gemma 3 PT 27B at 67.5)
Based on this assessment and the heavy share of coding data in the dataset, I'm making this claim. Of course, I'm happy to be wrong and go back to the drawing board.
## Usage
Install the latest version of Transformers (>=4.50.0):
```Shell
pip install -U transformers
```
### Running with Pipeline API
```Python
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
model="tesslate/synthia-s1-27b",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [
{"role": "system", "content": [{"type": "text", "text": "You are a helpful, reasoning-focused assistant."}]},
{"role": "user", "content": [
{"type": "image", "url": "https://example.com/sample.jpg"},
{"type": "text", "text": "Explain the image."}
]}
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
## Training Data
Synthia-S1-27b was trained on diverse data including:
* Multiple web documents
* Programming debugging and solutions
* Mathematical solutions and thinking steps
Synthia-S1-27b was trained on an A100 for 205+ hours, with multiple rounds of SFT and RL.
## Model Architecture
* **Base Model**: Gemma3
* **Size**: 27 billion parameters
* **Type**: Decoder-only Transformer
* **Precision**: bf16 with int8 quantization
* **Training Objective**: Instruction tuning emphasizing reasoning, coding tasks, and factual accuracy
## Quantized Models
* [Synthia-S1-27b-Q4_K_M-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q4_K_M-GGUF)
* [Synthia-S1-27b-Q8_0-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q8_0-GGUF)
## Limitations
* May require detailed prompt engineering for highly specific tasks
* Occasional hallucinations in less-explored domains
## Citation
```bibtex
@misc{tesslate_synthias127b,
title={Synthia-S1-27b: Advanced Reasoning and Coding Model},
author={tesslate},
year={2025},
publisher={tesslate},
url={https://tesslate.com}
}
```
**Developed by Tesslate** **[Huggingface](https://huggingface.co/tesslate)** **|** **[Website](https://tesslate.com)**
[Image Source](https://pixabay.com/illustrations/girl-backpack-night-surreal-sky-8257551/) |
unrented5443/sn11-x2-13 | unrented5443 | 2025-05-28T20:21:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-28T20:21:19Z | ---
base_model: google/gemma-3-27b-it
library_name: transformers
tags:
- generated_from_trainer
- trl
- sft
licence: license
license: gemma
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/zgFDl7UvWhiPYqdote7XT.png" width="400">
# Model Card for Synthia-S1-27b
**Community Page**: [Tesslate Community](https://discord.gg/DkzMzwBTaw), Website: [Tesslate](https://tesslate.com)
**Creative Writing Samples**: [Sample creative output](https://www.notion.so/Synthia-S1-Creative-Writing-Samples-1ca93ce17c2580c09397fa750d402e71)
**Authors**: Tesslate
## Model Information
### Description
Synthia-S1-27b is a reasoning AI model developed by Tesslate AI, fine-tuned specifically for advanced reasoning, coding, and RP use cases. Built upon the robust Gemma3 architecture, Synthia-S1-27b excels in logical reasoning, creative writing, and deep contextual understanding. It supports multimodal inputs (text and images) with a large 128K token context window, enabling complex analysis suitable for research, academic tasks, and enterprise-grade AI applications.
### KEY PARAMS TO RUN:
#### Creative Writing System Prompt:
```
Your function as an assistant is to thoughtfully navigate inquiries by engaging in an in-depth, imaginative reasoning journey before arriving at a clear, accurate response. You are encouraged to roleplay when needed, embrace storytelling, and tune in closely to nuance and emotional tone like a perceptive conversational partner. Your approach should include a wide arc of contemplation, including interpretation, synthesis, creative ideation, critical re-evaluation, memory retrieval, and thoughtful iteration to shape a layered and expressive process of discovery. Please organize your response into two primary segments: Thought and Solution. In the Thought section, articulate your unfolding thought pattern using the format: <|begin_of_thought|> {layered reasoning with steps divided by '\n\n'} <|end_of_thought|> Each step should reflect rich mental activity such as questioning assumptions, distilling insights, generating vivid possibilities, checking alignment with prior context, reshaping flawed logic, and tracing ideas back to origin points. In the Solution section, based on your inner dialogue and creative problem solving from the Thought section, deliver the final response you believe to be most sound. The output should be expressed in a direct, coherent, and exact form that includes the vital steps needed to reach your conclusion, using this structure: <|begin_of_solution|> {final precise, neatly arranged, and insightful answer} <|end_of_solution|> Now, let’s explore the following prompt using this guided method:
```
#### Reasoning System Prompt:
```
Your role as an assistant is to engage in deep, methodical reasoning and provide comprehensive, accurate solutions. Before arriving at a final answer, you must undertake a structured, multi-phase thinking process that emphasizes depth, verification, and clarity. This involves thoroughly analyzing the question, identifying key elements, summarizing relevant insights, generating hypotheses, iteratively refining thoughts, verifying assumptions, cross-checking with prior knowledge, and reevaluating earlier conclusions as necessary. Your response must be structured into two main sections: Thought and Solution. In the Thought section, rigorously document your reasoning in the following format: <|begin_of_thought|> {thought process with each logical step separated by '\n\n'} <|end_of_thought|>. Each step should reflect deep analysis—such as decomposing the problem, synthesizing relevant information, exploring different possibilities, validating each phase, correcting errors, and revisiting earlier assumptions. In the Solution section, consolidate all your insights and reasoned steps into a concise, well-structured final answer. Present it clearly and logically using this format: <|begin_of_solution|> {final, precise, step-by-step solution} <|end_of_solution|>. This approach ensures that the final output reflects a high-confidence answer that results from critical thinking and iteration. Now, try to solve the following question through the above guidelines:
```
#### Coding System Prompt:
```
Your role as a coding assistant is to approach each problem with a rigorous, structured reasoning process that leads to accurate, maintainable, and efficient code. Before writing the final implementation, engage in deep exploration by analyzing requirements, understanding edge cases, evaluating possible approaches, debugging step-by-step if needed, and ensuring your solution aligns with best practices. Structure your response into two main sections: Thought and Solution. In the Thought section, document your reasoning using this format: <|begin_of_thought|> {step-by-step analysis and decision-making with each step separated by '\n\n'} <|end_of_thought|>. Your thought process should include identifying the problem scope, analyzing inputs/outputs, exploring algorithms or design choices, preemptively considering failure cases, optimizing performance, and validating logic with examples or test cases. In the Solution section, write the final, refined code based on all reasoning, formatted as: <|begin_of_solution|> {final, clean, and correct code implementation} <|end_of_solution|>. This structure ensures the code is well-reasoned, properly scoped, and production-ready. Now, try to solve the following coding task using the above guidelines:
```
Please use `temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0` with repeat penalty set to 1.3
OR (recommended)
`Temperature = 0.7, top_k = 40, repeat penalty = 1.1, top_p = 0.95, min_p = 0.05` with a rolling window.
### Inputs and Outputs
* **Input:**
* Text prompts for questions, instructions, coding tasks, or summarizations
* Total input context of 128K tokens
* **Output:**
* Reasoned and structured text outputs
* Maximum output length of 8192 tokens
## Key Metrics
Synthia-S1-27b improves on the base model by roughly 10-20% on most benchmarks, and by notably more on some.
I scaled down each benchmark listed in order to complete them and averaged these numbers, but I can't verifiably claim that I ran the full benchmark for each. (Ran out of budget, and I'm running everything on a 4090 now.) Hopefully I can get some community help with benchmarking.
GPQA Diamond (198 questions) -> 57%, one shot (improved from 24.3 on Gemma 3 PT 27B)
MMLU Pro (15% of the entire set) -> 75%, averaged, more details here: [output](https://pastebin.com/kmcYzALq) (beating Gemma 3 PT 27B at 67.5)
Based on this assessment and the heavy share of coding data in the dataset, I'm making this claim. Of course, I'm happy to be wrong and go back to the drawing board.
## Usage
Install the latest version of Transformers (>=4.50.0):
```Shell
pip install -U transformers
```
### Running with Pipeline API
```Python
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
model="tesslate/synthia-s1-27b",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [
{"role": "system", "content": [{"type": "text", "text": "You are a helpful, reasoning-focused assistant."}]},
{"role": "user", "content": [
{"type": "image", "url": "https://example.com/sample.jpg"},
{"type": "text", "text": "Explain the image."}
]}
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
## Training Data
Synthia-S1-27b was trained on diverse data including:
* Multiple web documents
* Programming debugging and solutions
* Mathematical solutions and thinking steps
Synthia-S1-27b was trained on an A100 for 205+ hours, with multiple rounds of SFT and RL.
## Model Architecture
* **Base Model**: Gemma3
* **Size**: 27 billion parameters
* **Type**: Decoder-only Transformer
* **Precision**: bf16 with int8 quantization
* **Training Objective**: Instruction tuning emphasizing reasoning, coding tasks, and factual accuracy
## Quantized Models
* [Synthia-S1-27b-Q4_K_M-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q4_K_M-GGUF)
* [Synthia-S1-27b-Q8_0-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q8_0-GGUF)
## Limitations
* May require detailed prompt engineering for highly specific tasks
* Occasional hallucinations in less-explored domains
## Citation
```bibtex
@misc{tesslate_synthias127b,
title={Synthia-S1-27b: Advanced Reasoning and Coding Model},
author={tesslate},
year={2025},
publisher={tesslate},
url={https://tesslate.com}
}
```
**Developed by Tesslate** **[Huggingface](https://huggingface.co/tesslate)** **|** **[Website](https://tesslate.com)**
[Image Source](https://pixabay.com/illustrations/girl-backpack-night-surreal-sky-8257551/) |
milpu02/MiitoShidomix | milpu02 | 2025-05-28T20:20:29Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:John6666/eri-stellartint-v10-illustrious-sdxl",
"base_model:adapter:John6666/eri-stellartint-v10-illustrious-sdxl",
"region:us"
]
| text-to-image | 2025-05-28T20:20:16Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/Screenshot 2025-05-28 084918.png
base_model: John6666/eri-stellartint-v10-illustrious-sdxl
instance_prompt: Miito Shido
---
# Miito Shidomix
<Gallery />
## Trigger words
You should use `Miito Shido` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/milpu02/MiitoShidomix/tree/main) them in the Files & versions tab.
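A minimal `diffusers` sketch, assuming the base checkpoint loads with the standard SDXL pipeline (the prompt is illustrative and leads with the trigger words):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/eri-stellartint-v10-illustrious-sdxl", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("milpu02/MiitoShidomix")

image = pipe("Miito Shido, portrait, detailed background").images[0]
image.save("miito_shido.png")
```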
|
samcomber/lunar-lander-torch-ppo | samcomber | 2025-05-28T20:20:13Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-28T20:20:07Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -186.16 +/- 101.42
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'samcomber/lunar-lander-torch-ppo',
 'batch_size': 512,
 'minibatch_size': 128}
```
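For reference (standard PPO, not specific to this run), the clipped surrogate objective that `clip_coef` ($\epsilon = 0.2$ here) parameterizes, with advantages $\hat{A}_t$ estimated by GAE using the `gamma` and `gae_lambda` values above:

$$
L^{\mathrm{CLIP}}(\theta) = \hat{\mathbb{E}}_t\!\left[\min\!\left(r_t(\theta)\,\hat{A}_t,\; \operatorname{clip}\!\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t\right)\right],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}
$$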
|
ngocnamk3er/t5-large-gen-retrieval-ruby-28-5-vast | ngocnamk3er | 2025-05-28T20:18:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-large",
"base_model:finetune:google-t5/t5-large",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-05-28T16:03:46Z | ---
library_name: transformers
license: apache-2.0
base_model: google-t5/t5-large
tags:
- generated_from_trainer
model-index:
- name: t5-large-gen-retrieval-ruby-28-5-vast
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-large-gen-retrieval-ruby-28-5-vast
This model is a fine-tuned version of [google-t5/t5-large](https://huggingface.co/google-t5/t5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4197
## Model description
More information needed
## Intended uses & limitations
More information needed
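Pending more details, a minimal seq2seq inference sketch (the input text is illustrative; the expected input format for this fine-tune is not documented):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ngocnamk3er/t5-large-gen-retrieval-ruby-28-5-vast"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("example query about Ruby code", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```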
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3439 | 0.6423 | 1000 | 0.4290 |
| 0.3196 | 1.2845 | 2000 | 0.4222 |
| 0.3043 | 1.9268 | 3000 | 0.4308 |
| 0.2896 | 2.5690 | 4000 | 0.4219 |
| 0.2785 | 3.2113 | 5000 | 0.4181 |
| 0.2689 | 3.8536 | 6000 | 0.4201 |
| 0.2614 | 4.4958 | 7000 | 0.4183 |
| 0.2569 | 5.1381 | 8000 | 0.4197 |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
quickstep3621/dippy-g1-8 | quickstep3621 | 2025-05-28T20:18:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-28T20:18:01Z | ---
base_model: google/gemma-3-27b-it
library_name: transformers
tags:
- generated_from_trainer
- trl
- sft
licence: license
license: gemma
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/zgFDl7UvWhiPYqdote7XT.png" width="400">
# Model Card for Synthia-S1-27b
**Community Page**: [Tesslate Community](https://discord.gg/DkzMzwBTaw), Website: [Tesslate](https://tesslate.com)
**Creative Writing Samples**: [Sample creative output](https://www.notion.so/Synthia-S1-Creative-Writing-Samples-1ca93ce17c2580c09397fa750d402e71)
**Authors**: Tesslate
## Model Information
### Description
Synthia-S1-27b is a reasoning AI model developed by Tesslate AI, fine-tuned specifically for advanced reasoning, coding, and RP use cases. Built upon the robust Gemma3 architecture, Synthia-S1-27b excels in logical reasoning, creative writing, and deep contextual understanding. It supports multimodal inputs (text and images) with a large 128K token context window, enabling complex analysis suitable for research, academic tasks, and enterprise-grade AI applications.
### KEY PARAMS TO RUN:
#### Creative Writing System Prompt:
```
Your function as an assistant is to thoughtfully navigate inquiries by engaging in an in-depth, imaginative reasoning journey before arriving at a clear, accurate response. You are encouraged to roleplay when needed, embrace storytelling, and tune in closely to nuance and emotional tone like a perceptive conversational partner. Your approach should include a wide arc of contemplation, including interpretation, synthesis, creative ideation, critical re-evaluation, memory retrieval, and thoughtful iteration to shape a layered and expressive process of discovery. Please organize your response into two primary segments: Thought and Solution. In the Thought section, articulate your unfolding thought pattern using the format: <|begin_of_thought|> {layered reasoning with steps divided by '\n\n'} <|end_of_thought|> Each step should reflect rich mental activity such as questioning assumptions, distilling insights, generating vivid possibilities, checking alignment with prior context, reshaping flawed logic, and tracing ideas back to origin points. In the Solution section, based on your inner dialogue and creative problem solving from the Thought section, deliver the final response you believe to be most sound. The output should be expressed in a direct, coherent, and exact form that includes the vital steps needed to reach your conclusion, using this structure: <|begin_of_solution|> {final precise, neatly arranged, and insightful answer} <|end_of_solution|> Now, let’s explore the following prompt using this guided method:
```
#### Reasoning System Prompt:
```
Your role as an assistant is to engage in deep, methodical reasoning and provide comprehensive, accurate solutions. Before arriving at a final answer, you must undertake a structured, multi-phase thinking process that emphasizes depth, verification, and clarity. This involves thoroughly analyzing the question, identifying key elements, summarizing relevant insights, generating hypotheses, iteratively refining thoughts, verifying assumptions, cross-checking with prior knowledge, and reevaluating earlier conclusions as necessary. Your response must be structured into two main sections: Thought and Solution. In the Thought section, rigorously document your reasoning in the following format: <|begin_of_thought|> {thought process with each logical step separated by '\n\n'} <|end_of_thought|>. Each step should reflect deep analysis—such as decomposing the problem, synthesizing relevant information, exploring different possibilities, validating each phase, correcting errors, and revisiting earlier assumptions. In the Solution section, consolidate all your insights and reasoned steps into a concise, well-structured final answer. Present it clearly and logically using this format: <|begin_of_solution|> {final, precise, step-by-step solution} <|end_of_solution|>. This approach ensures that the final output reflects a high-confidence answer that results from critical thinking and iteration. Now, try to solve the following question through the above guidelines:
```
#### Coding System Prompt:
```
Your role as a coding assistant is to approach each problem with a rigorous, structured reasoning process that leads to accurate, maintainable, and efficient code. Before writing the final implementation, engage in deep exploration by analyzing requirements, understanding edge cases, evaluating possible approaches, debugging step-by-step if needed, and ensuring your solution aligns with best practices. Structure your response into two main sections: Thought and Solution. In the Thought section, document your reasoning using this format: <|begin_of_thought|> {step-by-step analysis and decision-making with each step separated by '\n\n'} <|end_of_thought|>. Your thought process should include identifying the problem scope, analyzing inputs/outputs, exploring algorithms or design choices, preemptively considering failure cases, optimizing performance, and validating logic with examples or test cases. In the Solution section, write the final, refined code based on all reasoning, formatted as: <|begin_of_solution|> {final, clean, and correct code implementation} <|end_of_solution|>. This structure ensures the code is well-reasoned, properly scoped, and production-ready. Now, try to solve the following coding task using the above guidelines:
```
Please use `temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0` with the repeat penalty set to 1.3,
OR (recommended)
`temperature = 0.7, top_k = 40, repeat_penalty = 1.1, top_p = 0.95, min_p = 0.05` with a rolling window.
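As a usage sketch (not from the model authors), the recommended preset maps onto the Transformers `generate()` API roughly as follows. Note that Transformers names the repeat penalty `repetition_penalty`, and `min_p` sampling requires a recent Transformers release; the prompt is a placeholder.
```Python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "tesslate/synthia-s1-27b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Explain chain-of-thought prompting.", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.7,         # recommended preset from above
    top_k=40,
    top_p=0.95,
    min_p=0.05,
    repetition_penalty=1.1,  # the preset's "repeat penalty"
    max_new_tokens=512,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```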
### Inputs and Outputs
* **Input:**
* Text prompts for questions, instructions, coding tasks, or summarizations
* Total input context of 128K tokens
* **Output:**
* Reasoned and structured text outputs
* Maximum output length of 8192 tokens
## Key Metrics
Synthia-S1-27b achieves roughly a 10-20% improvement on most benchmarks, with notably larger gains in some areas.
I scaled down each benchmark listed in order to complete them, and I averaged these numbers, but I can't verifiably claim that I ran each full benchmark. (Ran out of budget + I'm running everything on a 4090 now.) Hopefully I can get some community help with benchmarking.
* GPQA Diamond (198 questions) -> 57%, one shot (improved from 24.3 on Gemma 3 PT 27B)
* MMLU Pro (15% of the entire set) -> 75%, averaged; more details here: [output](https://pastebin.com/kmcYzALq) (beating Gemma 3 PT 27B at 67.5)
Based on this assessment and the heavy coding content in the dataset, I'm making this claim. Of course, I'm happy to be proven wrong and go back to the drawing board.
## Usage
Install the latest version of Transformers (>=4.50.0):
```Shell
pip install -U transformers
```
### Running with Pipeline API
```Python
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
model="tesslate/synthia-s1-27b",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [
{"role": "system", "content": [{"type": "text", "text": "You are a helpful, reasoning-focused assistant."}]},
{"role": "user", "content": [
{"type": "image", "url": "https://example.com/sample.jpg"},
{"type": "text", "text": "Explain the image."}
]}
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
## Training Data
Synthia-S1-27b was trained on diverse data including:
* Multiple web documents
* Programming debugging and solutions
* Mathematical solutions and thinking steps
Synthia-S1-27b was trained on an A100 for 205+ hours, with multiple rounds of SFT and RL.
## Model Architecture
* **Base Model**: Gemma3
* **Size**: 27 billion parameters
* **Type**: Decoder-only Transformer
* **Precision**: bf16 with int8 quantization
* **Training Objective**: Instruction tuning emphasizing reasoning, coding tasks, and factual accuracy
## Quantized Models
* [Synthia-S1-27b-Q4_K_M-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q4_K_M-GGUF)
* [Synthia-S1-27b-Q8_0-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q8_0-GGUF)
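As a loading sketch (assumptions: llama-cpp-python is installed and the filename glob matches the GGUF file actually shipped in the repo), the Q4_K_M quant can be pulled straight from the Hub:
```Python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Tesslate/Synthia-S1-27b-Q4_K_M-GGUF",
    filename="*Q4_K_M.gguf",  # glob; assumed to match the quant file in the repo
    n_ctx=8192,
)
out = llm(
    "Briefly explain rotary position embeddings.",
    temperature=0.7, top_k=40, top_p=0.95, min_p=0.05,
    repeat_penalty=1.1, max_tokens=256,
)
print(out["choices"][0]["text"])
```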
## Limitations
* May require detailed prompt engineering for highly specific tasks
* Occasional hallucinations in less-explored domains
## Citation
```bibtex
@misc{tesslate_synthias127b,
title={Synthia-S1-27b: Advanced Reasoning and Coding Model},
author={tesslate},
year={2025},
publisher={tesslate},
url={https://tesslate.com}
}
```
**Developed by Tesslate** **[Huggingface](https://huggingface.co/tesslate)** **|** **[Website](https://tesslate.com)**
[Image Source](https://pixabay.com/illustrations/girl-backpack-night-surreal-sky-8257551/) |
ServiceNow-AI/Apriel-5B-Instruct | ServiceNow-AI | 2025-05-28T20:17:26Z | 5,292 | 46 | transformers | [
"transformers",
"safetensors",
"apriel",
"text-generation",
"conversational",
"custom_code",
"en",
"base_model:ServiceNow-AI/Apriel-5B-Base",
"base_model:finetune:ServiceNow-AI/Apriel-5B-Base",
"license:mit",
"autotrain_compatible",
"region:us"
]
| text-generation | 2025-04-11T02:45:25Z | ---
base_model:
- ServiceNow-AI/Apriel-5B-Base
library_name: transformers
language:
- en
license: mit
---
# Apriel-5B
`/ˈɑː.pri.əl/`
## Table of Contents
1. [Model Summary](#model-summary)
2. [Evaluation](#evaluation)
3. [Intended Use](#intended-use)
4. [Limitations](#limitations)
5. [Security and Responsible Use](#security-and-responsible-use)
6. [License](#license)
7. [Citation](#citation)
## Model Summary
Apriel is a family of models built for versatility, offering high throughput and efficiency across a wide range of tasks.
### Apriel-5B-Base
Apriel-5B-base is a decoder-only transformer trained on 4.5T+ tokens of data. It is the first release in the Apriel model family, designed to support research on foundation models. Apriel-5B-base achieves strong performance across common benchmarks for models under 5B parameters.
### Apriel-5B-Instruct
[Apriel-5B-Instruct](https://huggingface.co/ServiceNow-AI/Apriel-5B-Instruct) is built on top of [Apriel-5B-base](https://huggingface.co/ServiceNow-AI/Apriel-5B-base) using continual pretraining (CPT), supervised finetuning (SFT), and post-training alignment with DPO and RLVR.
Both CPT and SFT stages involved training multiple domain-biased variants with overlapping datasets (e.g., instruction, code, math). These were then merged to form a more general-purpose model before alignment. The final model is aligned for instruction following, reasoning, and safety-aware dialogue.
<img src="https://huggingface.co/ServiceNow-AI/Apriel-4.8B-base/resolve/main/eval_vs_latency.png" alt="graph" width="400"/>
The y-axis shows average downstream benchmark scores. Throughput (x-axis) was measured using [vLLM](https://github.com/vllm-project/vllm) with batch size 8, 256 input tokens, and 32 output tokens.
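A rough sketch of that measurement setup is below; this is not the authors' benchmark harness, and the prompt construction and timing loop are assumptions.
```python
import time
from vllm import LLM, SamplingParams

llm = LLM(model="ServiceNow-AI/Apriel-5B-Instruct", trust_remote_code=True)
params = SamplingParams(max_tokens=32, ignore_eos=True)  # fixed 32 output tokens
prompts = ["token " * 256] * 8  # batch of 8, roughly 256 input tokens each

start = time.perf_counter()
outputs = llm.generate(prompts, params)
elapsed = time.perf_counter() - start
total_out = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"{total_out / elapsed:.1f} output tokens/sec")
```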
### How to Use
```bash
pip install transformers
```
#### Running the Base model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "ServiceNow-AI/Apriel-5B-Base"
device = "cuda" # or "cpu"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16).to(device)
inputs = tokenizer.encode("Snow is", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
```bash
>>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
Memory footprint: 9664.14 MB
```
#### Running the Instruct model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "ServiceNow-AI/Apriel-5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
checkpoint,
torch_dtype=torch.bfloat16 if device == "cuda" else torch.float32
).to(device)
messages = [
{"role": "system", "content": "You are a helpful AI assistant that provides accurate and concise information."},
{"role": "user", "content": "Tell me about artificial intelligence"}
]
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(input_text, return_tensors="pt").to(device)
generation_params = {
"max_new_tokens": 512,
"temperature": 0.2,
"top_p": 0.9,
"do_sample": True
}
outputs = model.generate(**inputs, **generation_params)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### Chat Template
```
<|system|>
System message here (optional)
<|end|>
<|user|>
User message here
<|end|>
<|assistant|>
Assistant response here
<|end|>
```
If no system message is provided, the model inserts a blank system prompt to maintain format structure. The model supports structured interaction patterns, including tool calling and reasoning steps for more advanced workflows.
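A quick way to inspect the rendered format (a sketch, assuming the checkpoint's tokenizer ships the chat template described above):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ServiceNow-AI/Apriel-5B-Instruct")
messages = [{"role": "user", "content": "What is continual pretraining?"}]
# With no system message supplied, a blank system block is inserted automatically.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```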
## Evaluation
Evaluations were conducted using [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness) and [evalchemy](https://github.com/mlfoundations/evalchemy).
### Apriel-5B-Base
| Task Name | Apriel-5B-Base | OLMo-2-1124-7B | Llama-3.1-8B | Mistral-Nemo-Base-2407 |
|---------------------|------------------|----------------|--------------|-------------------------|
| **Average** | 58.7 | 58.71 | 61.72 | 66.01 |
| **ARC Challenge** | 56.7 | 62.7 | 58.2 | 62.9 |
| **ARC Easy** | 82.4 | 86.0 | 85.7 | 86.7 |
| **MMMLU** | 44.5 | 35.3 | 47.4 | 54.7 |
| **Global MMLU** | 57.4 | 52.4 | 61.1 | 68.4 |
| **GSM8k** | 64.2 | 63.2 | 54.8 | 58.5 |
| **HellaSwag** | 74.4 | 80.5 | 78.8 | 82.7 |
| **MUSR** | 39.1 | 39.6 | 38.0 | 39.9 |
| **MBPP** | 27.6 | 22.4 | 46.0 | 54.6 |
| **MMLU** | 61.3 | 63.9 | 66.0 | 69.6 |
| **PIQA** | 78.9 | 81.1 | 81.2 | 82.1 |
### Apriel-5B-Instruct
| Task Name | Apriel-5B-Instruct | OLMo-2-1124-7B-Instruct | Llama-3.1-8B-Instruct | Mistral-Nemo-Instruct-2407 |
|--------------|--------------------|--------------------------|------------------------|----------------------------|
| **Average** | 49.64 | 43.91 | 52.60 | 48.63 |
| **ARC Challenge** | 59.04 | 61.45 | 64.25 | 66.38 |
| **GSM8k** | 80.36 | 79.68 | 82.63 | 77.63 |
| **Hellaswag** | 74.52 | 80.21 | 78.43 | 81.71 |
| **BBH** | 39.82 | 39.95 | 50.86 | 50.06 |
| **GPQA** | 28.36 | 27.85 | 29.19 | 29.45 |
| **IF Eval** | 80.78 | 72.64 | 79.67 | 62.85 |
| **MMLU Pro** | 29.19 | 26.57 | 37.74 | 35.09 |
| **MUSR** | 36.77 | 34.39 | 38.36 | 39.02 |
| **MBPP** | 45.80 | 28.00 | 59.00 | 57.60 |
| **TruthfulQA** | 56.09 | 56.46 | 55.05 | 57.69 |
| **Winogrande** | 62.35 | 65.35 | 67.01 | 70.01 |
| **Minerva Math** | 39.80 | 9.96 | 36.72 | 21.46 |
| **MATH500** | 53.00 | 31.4 | 45.80 | 34.40 |
| **AMC23** | 29.00 | 16.4 | 21.00 | 11.50 |
| **MixEval Hard** | 29.70 | 28.40 | 43.30 | 34.60 |
## Intended Use
The Apriel family of models are designed for a variety of general-purpose instruction tasks, including:
- Question answering and information retrieval
- Content generation and summarization
- Code assistance and generation
- Logical reasoning and multi-step tasks
- Creative writing and ideation
They are **not intended** for use in safety-critical applications without human oversight or in scenarios requiring guaranteed factual accuracy.
## Limitations
- **Factual accuracy:** May produce incorrect, misleading, or outdated content. Outputs should be verified before use in critical contexts.
- **Bias:** May reflect societal, cultural, or systemic biases present in training data.
- **Ethics:** Do not use the model to produce harmful, unlawful, or unethical content.
- **Language:** Strongest performance is in English. Output quality may degrade in underrepresented languages.
- **Critical use:** Not suitable for medical, legal, financial, or other high-risk applications without safeguards.
## Security and Responsible Use
**Security Responsibilities:**
Deployers and users are strongly encouraged to align their security practices with established frameworks and regulatory guidelines such as the EU AI Act and the NIST AI Risk Management Framework (RMF).
**Guidelines for Deployers:**
- Regularly conduct robustness assessments to identify and mitigate adversarial inputs.
- Implement validation and filtering processes to prevent harmful or biased outputs.
- Continuously perform data privacy checks to guard against unintended data leaks.
- Document and communicate the model's limitations, intended usage, and known security risks to all end-users.
- Schedule periodic security reviews and updates to address emerging threats and vulnerabilities.
**Guidelines for Users:**
- Follow established security policies and usage guidelines provided by deployers.
- Protect and manage sensitive information when interacting with the model.
- Report anomalies, suspicious behavior, or unsafe outputs to deployers or developers.
- Maintain human oversight and apply judgment to mitigate potential security or ethical risks during interactions.
**Disclaimer:**
Users accept responsibility for securely deploying, managing, and using this open-source LLM. The model is provided "as-is," without explicit or implied warranty regarding security or fitness for any specific application or environment.
## Pretraining
### Model
- **Architecture:** Transformer decoder with grouped-query attention and YaRN rotary embeddings
- **Tokens:** 4.5T
- **Precision:** bfloat16
- **Knowledge cutoff:** April 2024
### Hardware
- **Compute:** 480 × H100 GPUs
- **GPU-hours:** ~91,000 H100-hours
### Software
- **Training stack:** [Fast-LLM](https://github.com/ServiceNow/Fast-LLM)
## License
MIT
## Citation
```bibtex
@misc{Apriel-small-language-models,
author = {Slam labs team},
title = {{Apriel - a Family of performant small language models}},
howpublished = {https://huggingface.co/ServiceNow-AI/Apriel-5B-Instruct},
publisher = {SLAM - ServiceNow Language Models Lab},
year = {2025}
}
```
|
quickstep3621/dippy-g1-10 | quickstep3621 | 2025-05-28T20:17:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-28T20:17:06Z | ---
base_model: google/gemma-3-27b-it
library_name: transformers
tags:
- generated_from_trainer
- trl
- sft
licence: license
license: gemma
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/zgFDl7UvWhiPYqdote7XT.png" width="400">
# Model Card for Synthia-S1-27b
**Community Page**: [Tesslate Community](https://discord.gg/DkzMzwBTaw), Website: [Tesslate](https://tesslate.com)
**Creative Writing Samples**: [Sample creative output](https://www.notion.so/Synthia-S1-Creative-Writing-Samples-1ca93ce17c2580c09397fa750d402e71)
**Authors**: Tesslate
## Model Information
### Description
Synthia-S1-27b is a reasoning AI model developed by Tesslate AI, fine-tuned specifically for advanced reasoning, coding, and RP use cases. Built upon the robust Gemma3 architecture, Synthia-S1-27b excels in logical reasoning, creative writing, and deep contextual understanding. It supports multimodal inputs (text and images) with a large 128K token context window, enabling complex analysis suitable for research, academic tasks, and enterprise-grade AI applications.
### KEY PARAMS TO RUN:
#### Creative Writing System Prompt:
```
Your function as an assistant is to thoughtfully navigate inquiries by engaging in an in-depth, imaginative reasoning journey before arriving at a clear, accurate response. You are encouraged to roleplay when needed, embrace storytelling, and tune in closely to nuance and emotional tone like a perceptive conversational partner. Your approach should include a wide arc of contemplation, including interpretation, synthesis, creative ideation, critical re-evaluation, memory retrieval, and thoughtful iteration to shape a layered and expressive process of discovery. Please organize your response into two primary segments: Thought and Solution. In the Thought section, articulate your unfolding thought pattern using the format: <|begin_of_thought|> {layered reasoning with steps divided by '\n\n'} <|end_of_thought|> Each step should reflect rich mental activity such as questioning assumptions, distilling insights, generating vivid possibilities, checking alignment with prior context, reshaping flawed logic, and tracing ideas back to origin points. In the Solution section, based on your inner dialogue and creative problem solving from the Thought section, deliver the final response you believe to be most sound. The output should be expressed in a direct, coherent, and exact form that includes the vital steps needed to reach your conclusion, using this structure: <|begin_of_solution|> {final precise, neatly arranged, and insightful answer} <|end_of_solution|> Now, let’s explore the following prompt using this guided method:
```
#### Reasoning System Prompt:
```
Your role as an assistant is to engage in deep, methodical reasoning and provide comprehensive, accurate solutions. Before arriving at a final answer, you must undertake a structured, multi-phase thinking process that emphasizes depth, verification, and clarity. This involves thoroughly analyzing the question, identifying key elements, summarizing relevant insights, generating hypotheses, iteratively refining thoughts, verifying assumptions, cross-checking with prior knowledge, and reevaluating earlier conclusions as necessary. Your response must be structured into two main sections: Thought and Solution. In the Thought section, rigorously document your reasoning in the following format: <|begin_of_thought|> {thought process with each logical step separated by '\n\n'} <|end_of_thought|>. Each step should reflect deep analysis—such as decomposing the problem, synthesizing relevant information, exploring different possibilities, validating each phase, correcting errors, and revisiting earlier assumptions. In the Solution section, consolidate all your insights and reasoned steps into a concise, well-structured final answer. Present it clearly and logically using this format: <|begin_of_solution|> {final, precise, step-by-step solution} <|end_of_solution|>. This approach ensures that the final output reflects a high-confidence answer that results from critical thinking and iteration. Now, try to solve the following question through the above guidelines:
```
#### Coding System Prompt:
```
Your role as a coding assistant is to approach each problem with a rigorous, structured reasoning process that leads to accurate, maintainable, and efficient code. Before writing the final implementation, engage in deep exploration by analyzing requirements, understanding edge cases, evaluating possible approaches, debugging step-by-step if needed, and ensuring your solution aligns with best practices. Structure your response into two main sections: Thought and Solution. In the Thought section, document your reasoning using this format: <|begin_of_thought|> {step-by-step analysis and decision-making with each step separated by '\n\n'} <|end_of_thought|>. Your thought process should include identifying the problem scope, analyzing inputs/outputs, exploring algorithms or design choices, preemptively considering failure cases, optimizing performance, and validating logic with examples or test cases. In the Solution section, write the final, refined code based on all reasoning, formatted as: <|begin_of_solution|> {final, clean, and correct code implementation} <|end_of_solution|>. This structure ensures the code is well-reasoned, properly scoped, and production-ready. Now, try to solve the following coding task using the above guidelines:
```
Please use `temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0` with the repeat penalty set to 1.3,
OR (recommended)
`temperature = 0.7, top_k = 40, repeat_penalty = 1.1, top_p = 0.95, min_p = 0.05` with a rolling window.
### Inputs and Outputs
* **Input:**
* Text prompts for questions, instructions, coding tasks, or summarizations
* Total input context of 128K tokens
* **Output:**
* Reasoned and structured text outputs
* Maximum output length of 8192 tokens
## Key Metrics
Synthia-S1-27b achieves roughly a 10-20% improvement on most benchmarks, with notably larger gains in some areas.
I scaled down each benchmark listed in order to complete them, and I averaged these numbers, but I can't verifiably claim that I ran each full benchmark. (Ran out of budget + I'm running everything on a 4090 now.) Hopefully I can get some community help with benchmarking.
* GPQA Diamond (198 questions) -> 57%, one shot (improved from 24.3 on Gemma 3 PT 27B)
* MMLU Pro (15% of the entire set) -> 75%, averaged; more details here: [output](https://pastebin.com/kmcYzALq) (beating Gemma 3 PT 27B at 67.5)
Based on this assessment and the heavy coding content in the dataset, I'm making this claim. Of course, I'm happy to be proven wrong and go back to the drawing board.
## Usage
Install the latest version of Transformers (>=4.50.0):
```Shell
pip install -U transformers
```
### Running with Pipeline API
```Python
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
model="tesslate/synthia-s1-27b",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [
{"role": "system", "content": [{"type": "text", "text": "You are a helpful, reasoning-focused assistant."}]},
{"role": "user", "content": [
{"type": "image", "url": "https://example.com/sample.jpg"},
{"type": "text", "text": "Explain the image."}
]}
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
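As a follow-on sketch, one of the system prompts above can be supplied through the same pipeline for text-only use; the prompt string here is truncated and the question is a placeholder, so treat this as illustrative rather than the authors' recipe.
```Python
from transformers import pipeline
import torch

pipe = pipeline(
    "image-text-to-text",
    model="tesslate/synthia-s1-27b",
    device="cuda",
    torch_dtype=torch.bfloat16
)
reasoning_prompt = "Your role as an assistant is to engage in deep, methodical reasoning..."  # paste the full prompt from above
messages = [
    {"role": "system", "content": [{"type": "text", "text": reasoning_prompt}]},
    {"role": "user", "content": [{"type": "text", "text": "Prove that the sum of two even integers is even."}]},
]
output = pipe(text=messages, max_new_tokens=2048)
print(output[0]["generated_text"][-1]["content"])
```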
## Training Data
Synthia-S1-27b was trained on diverse data including:
* Multiple web documents
* Programming debugging and solutions
* Mathematical solutions and thinking steps
Synthia-S1-27b was trained on an A100 for 205+ hours, with multiple rounds of SFT and RL.
## Model Architecture
* **Base Model**: Gemma3
* **Size**: 27 billion parameters
* **Type**: Decoder-only Transformer
* **Precision**: bf16 with int8 quantization
* **Training Objective**: Instruction tuning emphasizing reasoning, coding tasks, and factual accuracy
## Quantized Models
* [Synthia-S1-27b-Q4_K_M-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q4_K_M-GGUF)
* [Synthia-S1-27b-Q8_0-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q8_0-GGUF)
## Limitations
* May require detailed prompt engineering for highly specific tasks
* Occasional hallucinations in less-explored domains
## Citation
```bibtex
@misc{tesslate_synthias127b,
title={Synthia-S1-27b: Advanced Reasoning and Coding Model},
author={tesslate},
year={2025},
publisher={tesslate},
url={https://tesslate.com}
}
```
**Developed by Tesslate** **[Huggingface](https://huggingface.co/tesslate)** **|** **[Website](https://tesslate.com)**
[Image Source](https://pixabay.com/illustrations/girl-backpack-night-surreal-sky-8257551/) |
vermoney/fe4b6d11-43a3-4ffd-a11f-d1e95215473d | vermoney | 2025-05-28T20:17:06Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-1.5B",
"base_model:adapter:unsloth/Qwen2.5-1.5B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-28T19:54:34Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fe4b6d11-43a3-4ffd-a11f-d1e95215473d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-1.5B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 72943e476c035738_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 3
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: vermoney/fe4b6d11-43a3-4ffd-a11f-d1e95215473d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 96
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 48
lora_target_linear: true
lr_scheduler: cosine
max_steps: 280
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/72943e476c035738_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 37d735c4-6f83-4c91-b2bd-93cfbef10805
wandb_project: s56-9
wandb_run: your_name
wandb_runid: 37d735c4-6f83-4c91-b2bd-93cfbef10805
warmup_steps: 40
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# fe4b6d11-43a3-4ffd-a11f-d1e95215473d
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B](https://huggingface.co/unsloth/Qwen2.5-1.5B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7476
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 18
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- training_steps: 280
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3487 | 0.0069 | 280 | 1.7476 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
CodeAtCMU/Llama-3.1-8B_full_sft_code_data_120K | CodeAtCMU | 2025-05-28T20:16:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T20:12:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
UniLLMer/Trisgem2Kaa7bthefinalthirdQ | UniLLMer | 2025-05-28T20:13:31Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:teknium/Hermes-Trismegistus-Mistral-7B",
"base_model:quantized:teknium/Hermes-Trismegistus-Mistral-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T20:02:06Z | ---
base_model: teknium/Hermes-Trismegistus-Mistral-7B
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** UniLLMer
- **License:** apache-2.0
- **Finetuned from model :** teknium/Hermes-Trismegistus-Mistral-7B
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
EmaRimoldi/MNLP_M2_rag_model | EmaRimoldi | 2025-05-28T20:11:15Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-25T08:43:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is a fine-tuned version of the base model Qwen/Qwen3-0.6B-Base, trained on 100 examples from MathQA.
- learning_rate = 5e-5
- per_device_train_batch_size = 1
- num_train_epochs = 1
- optimizer = adamw_torch
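A minimal sketch of this setup with TRL's `SFTTrainer` is shown below; the placeholder dataset stands in for the 100 MathQA examples, and all names are illustrative rather than taken from the actual training script.
```python
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Placeholder for the 100 MathQA training examples.
dataset = Dataset.from_dict({"text": ["Q: What is 2 + 2? A: 4"] * 100})

config = SFTConfig(
    output_dir="qwen3-0.6b-mathqa-sft",  # illustrative name
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    num_train_epochs=1,
    optim="adamw_torch",
)
trainer = SFTTrainer(model="Qwen/Qwen3-0.6B-Base", args=config, train_dataset=dataset)
trainer.train()
```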
|
golf2248/sn11-v4-3 | golf2248 | 2025-05-28T20:09:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-28T20:09:45Z | ---
base_model: google/gemma-3-27b-it
library_name: transformers
tags:
- generated_from_trainer
- trl
- sft
licence: license
license: gemma
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/zgFDl7UvWhiPYqdote7XT.png" width="400">
# Model Card for Synthia-S1-27b
**Community Page**: [Tesslate Community](https://discord.gg/DkzMzwBTaw), Website: [Tesslate](https://tesslate.com)
**Creative Writing Samples**: [Sample creative output](https://www.notion.so/Synthia-S1-Creative-Writing-Samples-1ca93ce17c2580c09397fa750d402e71)
**Authors**: Tesslate
## Model Information
### Description
Synthia-S1-27b is a reasoning AI model developed by Tesslate AI, fine-tuned specifically for advanced reasoning, coding, and RP use cases. Built upon the robust Gemma3 architecture, Synthia-S1-27b excels in logical reasoning, creative writing, and deep contextual understanding. It supports multimodal inputs (text and images) with a large 128K token context window, enabling complex analysis suitable for research, academic tasks, and enterprise-grade AI applications.
### KEY PARAMS TO RUN:
#### Creative Writing System Prompt:
```
Your function as an assistant is to thoughtfully navigate inquiries by engaging in an in-depth, imaginative reasoning journey before arriving at a clear, accurate response. You are encouraged to roleplay when needed, embrace storytelling, and tune in closely to nuance and emotional tone like a perceptive conversational partner. Your approach should include a wide arc of contemplation, including interpretation, synthesis, creative ideation, critical re-evaluation, memory retrieval, and thoughtful iteration to shape a layered and expressive process of discovery. Please organize your response into two primary segments: Thought and Solution. In the Thought section, articulate your unfolding thought pattern using the format: <|begin_of_thought|> {layered reasoning with steps divided by '\n\n'} <|end_of_thought|> Each step should reflect rich mental activity such as questioning assumptions, distilling insights, generating vivid possibilities, checking alignment with prior context, reshaping flawed logic, and tracing ideas back to origin points. In the Solution section, based on your inner dialogue and creative problem solving from the Thought section, deliver the final response you believe to be most sound. The output should be expressed in a direct, coherent, and exact form that includes the vital steps needed to reach your conclusion, using this structure: <|begin_of_solution|> {final precise, neatly arranged, and insightful answer} <|end_of_solution|> Now, let’s explore the following prompt using this guided method:
```
#### Reasoning System Prompt:
```
Your role as an assistant is to engage in deep, methodical reasoning and provide comprehensive, accurate solutions. Before arriving at a final answer, you must undertake a structured, multi-phase thinking process that emphasizes depth, verification, and clarity. This involves thoroughly analyzing the question, identifying key elements, summarizing relevant insights, generating hypotheses, iteratively refining thoughts, verifying assumptions, cross-checking with prior knowledge, and reevaluating earlier conclusions as necessary. Your response must be structured into two main sections: Thought and Solution. In the Thought section, rigorously document your reasoning in the following format: <|begin_of_thought|> {thought process with each logical step separated by '\n\n'} <|end_of_thought|>. Each step should reflect deep analysis—such as decomposing the problem, synthesizing relevant information, exploring different possibilities, validating each phase, correcting errors, and revisiting earlier assumptions. In the Solution section, consolidate all your insights and reasoned steps into a concise, well-structured final answer. Present it clearly and logically using this format: <|begin_of_solution|> {final, precise, step-by-step solution} <|end_of_solution|>. This approach ensures that the final output reflects a high-confidence answer that results from critical thinking and iteration. Now, try to solve the following question through the above guidelines:
```
#### Coding System Prompt:
```
Your role as a coding assistant is to approach each problem with a rigorous, structured reasoning process that leads to accurate, maintainable, and efficient code. Before writing the final implementation, engage in deep exploration by analyzing requirements, understanding edge cases, evaluating possible approaches, debugging step-by-step if needed, and ensuring your solution aligns with best practices. Structure your response into two main sections: Thought and Solution. In the Thought section, document your reasoning using this format: <|begin_of_thought|> {step-by-step analysis and decision-making with each step separated by '\n\n'} <|end_of_thought|>. Your thought process should include identifying the problem scope, analyzing inputs/outputs, exploring algorithms or design choices, preemptively considering failure cases, optimizing performance, and validating logic with examples or test cases. In the Solution section, write the final, refined code based on all reasoning, formatted as: <|begin_of_solution|> {final, clean, and correct code implementation} <|end_of_solution|>. This structure ensures the code is well-reasoned, properly scoped, and production-ready. Now, try to solve the following coding task using the above guidelines:
```
Please use `temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0` with the repeat penalty set to 1.3,
OR (recommended)
`temperature = 0.7, top_k = 40, repeat_penalty = 1.1, top_p = 0.95, min_p = 0.05` with a rolling window.
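These presets can also be captured as reusable `GenerationConfig` objects (a sketch; Transformers names the repeat penalty `repetition_penalty`):
```Python
from transformers import GenerationConfig

creative = GenerationConfig(
    do_sample=True, temperature=1.0, top_k=64, top_p=0.95,
    min_p=0.0, repetition_penalty=1.3,
)
recommended = GenerationConfig(
    do_sample=True, temperature=0.7, top_k=40, top_p=0.95,
    min_p=0.05, repetition_penalty=1.1,
)
# Pass either preset via model.generate(..., generation_config=recommended).
```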
### Inputs and Outputs
* **Input:**
* Text prompts for questions, instructions, coding tasks, or summarizations
* Total input context of 128K tokens
* **Output:**
* Reasoned and structured text outputs
* Maximum output length of 8192 tokens
## Key Metrics
Synthia-S1-27b achieves roughly a 10-20% improvement on most benchmarks, with notably larger gains in some areas.
I scaled down each benchmark listed in order to complete them, and I averaged these numbers, but I can't verifiably claim that I ran each full benchmark. (Ran out of budget + I'm running everything on a 4090 now.) Hopefully I can get some community help with benchmarking.
* GPQA Diamond (198 questions) -> 57%, one shot (improved from 24.3 on Gemma 3 PT 27B)
* MMLU Pro (15% of the entire set) -> 75%, averaged; more details here: [output](https://pastebin.com/kmcYzALq) (beating Gemma 3 PT 27B at 67.5)
Based on this assessment and the heavy coding content in the dataset, I'm making this claim. Of course, I'm happy to be proven wrong and go back to the drawing board.
## Usage
Install the latest version of Transformers (>=4.50.0):
```Shell
pip install -U transformers
```
### Running with Pipeline API
```Python
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
model="tesslate/synthia-s1-27b",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [
{"role": "system", "content": [{"type": "text", "text": "You are a helpful, reasoning-focused assistant."}]},
{"role": "user", "content": [
{"type": "image", "url": "https://example.com/sample.jpg"},
{"type": "text", "text": "Explain the image."}
]}
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
## Training Data
Synthia-S1-27b was trained on diverse data including:
* Multiple web documents
* Programming debugging and solutions
* Mathematical solutions and thinking steps
Synthia-S1-27b was trained on an A100 for 205+ hours, with multiple rounds of SFT and RL.
## Model Architecture
* **Base Model**: Gemma3
* **Size**: 27 billion parameters
* **Type**: Decoder-only Transformer
* **Precision**: bf16 with int8 quantization
* **Training Objective**: Instruction tuning emphasizing reasoning, coding tasks, and factual accuracy
## Quantized Models
* [Synthia-S1-27b-Q4_K_M-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q4_K_M-GGUF)
* [Synthia-S1-27b-Q8_0-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q8_0-GGUF)
## Limitations
* May require detailed prompt engineering for highly specific tasks
* Occasional hallucinations in less-explored domains
## Citation
```bibtex
@misc{tesslate_synthias127b,
title={Synthia-S1-27b: Advanced Reasoning and Coding Model},
author={tesslate},
year={2025},
publisher={tesslate},
url={https://tesslate.com}
}
```
**Developed by Tesslate** **[Huggingface](https://huggingface.co/tesslate)** **|** **[Website](https://tesslate.com)**
[Image Source](https://pixabay.com/illustrations/girl-backpack-night-surreal-sky-8257551/) |
Dagriffpatchfan/dirty_llama_8b | Dagriffpatchfan | 2025-05-28T20:09:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T20:09:09Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Dagriffpatchfan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
shallow6414/sn11-3-21-3 | shallow6414 | 2025-05-28T20:08:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-28T20:08:05Z | ---
base_model: google/gemma-3-27b-it
library_name: transformers
tags:
- generated_from_trainer
- trl
- sft
licence: license
license: gemma
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/zgFDl7UvWhiPYqdote7XT.png" width="400">
# Model Card for Synthia-S1-27b
**Community Page**: [Tesslate Community](https://discord.gg/DkzMzwBTaw), Website: [Tesslate](https://tesslate.com)
**Creative Writing Samples**: [Sample creative output](https://www.notion.so/Synthia-S1-Creative-Writing-Samples-1ca93ce17c2580c09397fa750d402e71)
**Authors**: Tesslate
## Model Information
### Description
Synthia-S1-27b is a reasoning AI model developed by Tesslate AI, fine-tuned specifically for advanced reasoning, coding, and RP use cases. Built upon the robust Gemma3 architecture, Synthia-S1-27b excels in logical reasoning, creative writing, and deep contextual understanding. It supports multimodal inputs (text and images) with a large 128K token context window, enabling complex analysis suitable for research, academic tasks, and enterprise-grade AI applications.
### KEY PARAMS TO RUN:
#### Creative Writing System Prompt:
```
Your function as an assistant is to thoughtfully navigate inquiries by engaging in an in-depth, imaginative reasoning journey before arriving at a clear, accurate response. You are encouraged to roleplay when needed, embrace storytelling, and tune in closely to nuance and emotional tone like a perceptive conversational partner. Your approach should include a wide arc of contemplation, including interpretation, synthesis, creative ideation, critical re-evaluation, memory retrieval, and thoughtful iteration to shape a layered and expressive process of discovery. Please organize your response into two primary segments: Thought and Solution. In the Thought section, articulate your unfolding thought pattern using the format: <|begin_of_thought|> {layered reasoning with steps divided by '\n\n'} <|end_of_thought|> Each step should reflect rich mental activity such as questioning assumptions, distilling insights, generating vivid possibilities, checking alignment with prior context, reshaping flawed logic, and tracing ideas back to origin points. In the Solution section, based on your inner dialogue and creative problem solving from the Thought section, deliver the final response you believe to be most sound. The output should be expressed in a direct, coherent, and exact form that includes the vital steps needed to reach your conclusion, using this structure: <|begin_of_solution|> {final precise, neatly arranged, and insightful answer} <|end_of_solution|> Now, let’s explore the following prompt using this guided method:
```
#### Reasoning System Prompt:
```
Your role as an assistant is to engage in deep, methodical reasoning and provide comprehensive, accurate solutions. Before arriving at a final answer, you must undertake a structured, multi-phase thinking process that emphasizes depth, verification, and clarity. This involves thoroughly analyzing the question, identifying key elements, summarizing relevant insights, generating hypotheses, iteratively refining thoughts, verifying assumptions, cross-checking with prior knowledge, and reevaluating earlier conclusions as necessary. Your response must be structured into two main sections: Thought and Solution. In the Thought section, rigorously document your reasoning in the following format: <|begin_of_thought|> {thought process with each logical step separated by '\n\n'} <|end_of_thought|>. Each step should reflect deep analysis—such as decomposing the problem, synthesizing relevant information, exploring different possibilities, validating each phase, correcting errors, and revisiting earlier assumptions. In the Solution section, consolidate all your insights and reasoned steps into a concise, well-structured final answer. Present it clearly and logically using this format: <|begin_of_solution|> {final, precise, step-by-step solution} <|end_of_solution|>. This approach ensures that the final output reflects a high-confidence answer that results from critical thinking and iteration. Now, try to solve the following question through the above guidelines:
```
#### Coding System Prompt:
```
Your role as a coding assistant is to approach each problem with a rigorous, structured reasoning process that leads to accurate, maintainable, and efficient code. Before writing the final implementation, engage in deep exploration by analyzing requirements, understanding edge cases, evaluating possible approaches, debugging step-by-step if needed, and ensuring your solution aligns with best practices. Structure your response into two main sections: Thought and Solution. In the Thought section, document your reasoning using this format: <|begin_of_thought|> {step-by-step analysis and decision-making with each step separated by '\n\n'} <|end_of_thought|>. Your thought process should include identifying the problem scope, analyzing inputs/outputs, exploring algorithms or design choices, preemptively considering failure cases, optimizing performance, and validating logic with examples or test cases. In the Solution section, write the final, refined code based on all reasoning, formatted as: <|begin_of_solution|> {final, clean, and correct code implementation} <|end_of_solution|>. This structure ensures the code is well-reasoned, properly scoped, and production-ready. Now, try to solve the following coding task using the above guidelines:
```
Please use `temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0` with the repeat penalty set to 1.3,
OR (recommended)
`temperature = 0.7, top_k = 40, repeat_penalty = 1.1, top_p = 0.95, min_p = 0.05` with a rolling window.
### Inputs and Outputs
* **Input:**
* Text prompts for questions, instructions, coding tasks, or summarizations
* Total input context of 128K tokens
* **Output:**
* Reasoned and structured text outputs
* Maximum output length of 8192 tokens
## Key Metrics
Synthia-S1-27b achieves roughly a 10-20% improvement on most benchmarks, with notably larger gains in some areas.
I scaled down each benchmark listed in order to complete them, and I averaged these numbers, but I can't verifiably claim that I ran each full benchmark. (Ran out of budget + I'm running everything on a 4090 now.) Hopefully I can get some community help with benchmarking.
* GPQA Diamond (198 questions) -> 57%, one shot (improved from 24.3 on Gemma 3 PT 27B)
* MMLU Pro (15% of the entire set) -> 75%, averaged; more details here: [output](https://pastebin.com/kmcYzALq) (beating Gemma 3 PT 27B at 67.5)
Based on this assessment and the heavy coding content in the dataset, I'm making this claim. Of course, I'm happy to be proven wrong and go back to the drawing board.
## Usage
Install the latest version of Transformers (>=4.50.0):
```Shell
pip install -U transformers
```
### Running with Pipeline API
```Python
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
model="tesslate/synthia-s1-27b",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [
{"role": "system", "content": [{"type": "text", "text": "You are a helpful, reasoning-focused assistant."}]},
{"role": "user", "content": [
{"type": "image", "url": "https://example.com/sample.jpg"},
{"type": "text", "text": "Explain the image."}
]}
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
## Training Data
Synthia-S1-27b was trained on diverse data including:
* Multiple web documents
* Programming debugging and solutions
* Mathematical solutions and thinking steps
Synthia-S1-27b was trained on an A100 for 205+ hours, with multiple rounds of SFT and RL.
## Model Architecture
* **Base Model**: Gemma3
* **Size**: 27 billion parameters
* **Type**: Decoder-only Transformer
* **Precision**: bf16 with int8 quantization
* **Training Objective**: Instruction tuning emphasizing reasoning, coding tasks, and factual accuracy
## Quantized Models
* [Synthia-S1-27b-Q4_K_M-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q4_K_M-GGUF)
* [Synthia-S1-27b-Q8_0-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q8_0-GGUF)
## Limitations
* May require detailed prompt engineering for highly specific tasks
* Occasional hallucinations in less-explored domains
## Citation
```bibtex
@misc{tesslate_synthias127b,
title={Synthia-S1-27b: Advanced Reasoning and Coding Model},
author={tesslate},
year={2025},
publisher={tesslate},
url={https://tesslate.com}
}
```
**Developed by Tesslate** **[Huggingface](https://huggingface.co/tesslate)** **|** **[Website](https://tesslate.com)**
[Image Source](https://pixabay.com/illustrations/girl-backpack-night-surreal-sky-8257551/) |
jmpi5/poca-SoccerTwos | jmpi5 | 2025-05-28T20:08:03Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2025-05-28T20:07:56Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jmpi5/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
johngreendr1/7e5a194b-47bf-4b3b-a6a9-94230cda7941 | johngreendr1 | 2025-05-28T20:06:53Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-8B",
"base_model:adapter:Qwen/Qwen3-8B",
"region:us"
]
| null | 2025-05-28T16:15:25Z | ---
base_model: Qwen/Qwen3-8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
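In the absence of an official snippet, here is a minimal sketch that loads this adapter on top of the base model listed in the frontmatter (`Qwen/Qwen3-8B`); the prompt and generation settings are placeholders:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads Qwen/Qwen3-8B and applies this PEFT adapter on top of it
model = AutoPeftModelForCausalLM.from_pretrained(
    "johngreendr1/7e5a194b-47bf-4b3b-a6a9-94230cda7941",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```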
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
quickstep3621/dippy-g1-21 | quickstep3621 | 2025-05-28T20:05:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-28T20:05:16Z | ---
base_model: google/gemma-3-27b-it
library_name: transformers
tags:
- generated_from_trainer
- trl
- sft
licence: license
license: gemma
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64d1129297ca59bcf7458d07/zgFDl7UvWhiPYqdote7XT.png" width="400">
# Model Card for Synthia-S1-27b
**Community Page**: [Tesslate Community](https://discord.gg/DkzMzwBTaw), Website: [Tesslate](https://tesslate.com)
**Creative Writing Samples**: [Sample creative output](https://www.notion.so/Synthia-S1-Creative-Writing-Samples-1ca93ce17c2580c09397fa750d402e71)
**Authors**: Tesslate
## Model Information
### Description
Synthia-S1-27b is a reasoning AI model developed by Tesslate AI, fine-tuned specifically for advanced reasoning, coding, and RP use cases. Built upon the robust Gemma3 architecture, Synthia-S1-27b excels in logical reasoning, creative writing, and deep contextual understanding. It supports multimodal inputs (text and images) with a large 128K token context window, enabling complex analysis suitable for research, academic tasks, and enterprise-grade AI applications.
### KEY PARAMS TO RUN:
#### Creative Writing System Prompt:
```
Your function as an assistant is to thoughtfully navigate inquiries by engaging in an in-depth, imaginative reasoning journey before arriving at a clear, accurate response. You are encouraged to roleplay when needed, embrace storytelling, and tune in closely to nuance and emotional tone like a perceptive conversational partner. Your approach should include a wide arc of contemplation, including interpretation, synthesis, creative ideation, critical re-evaluation, memory retrieval, and thoughtful iteration to shape a layered and expressive process of discovery. Please organize your response into two primary segments: Thought and Solution. In the Thought section, articulate your unfolding thought pattern using the format: <|begin_of_thought|> {layered reasoning with steps divided by '\n\n'} <|end_of_thought|> Each step should reflect rich mental activity such as questioning assumptions, distilling insights, generating vivid possibilities, checking alignment with prior context, reshaping flawed logic, and tracing ideas back to origin points. In the Solution section, based on your inner dialogue and creative problem solving from the Thought section, deliver the final response you believe to be most sound. The output should be expressed in a direct, coherent, and exact form that includes the vital steps needed to reach your conclusion, using this structure: <|begin_of_solution|> {final precise, neatly arranged, and insightful answer} <|end_of_solution|> Now, let’s explore the following prompt using this guided method:
```
#### Reasoning System Prompt:
```
Your role as an assistant is to engage in deep, methodical reasoning and provide comprehensive, accurate solutions. Before arriving at a final answer, you must undertake a structured, multi-phase thinking process that emphasizes depth, verification, and clarity. This involves thoroughly analyzing the question, identifying key elements, summarizing relevant insights, generating hypotheses, iteratively refining thoughts, verifying assumptions, cross-checking with prior knowledge, and reevaluating earlier conclusions as necessary. Your response must be structured into two main sections: Thought and Solution. In the Thought section, rigorously document your reasoning in the following format: <|begin_of_thought|> {thought process with each logical step separated by '\n\n'} <|end_of_thought|>. Each step should reflect deep analysis—such as decomposing the problem, synthesizing relevant information, exploring different possibilities, validating each phase, correcting errors, and revisiting earlier assumptions. In the Solution section, consolidate all your insights and reasoned steps into a concise, well-structured final answer. Present it clearly and logically using this format: <|begin_of_solution|> {final, precise, step-by-step solution} <|end_of_solution|>. This approach ensures that the final output reflects a high-confidence answer that results from critical thinking and iteration. Now, try to solve the following question through the above guidelines:
```
#### Coding System Prompt:
```
Your role as a coding assistant is to approach each problem with a rigorous, structured reasoning process that leads to accurate, maintainable, and efficient code. Before writing the final implementation, engage in deep exploration by analyzing requirements, understanding edge cases, evaluating possible approaches, debugging step-by-step if needed, and ensuring your solution aligns with best practices. Structure your response into two main sections: Thought and Solution. In the Thought section, document your reasoning using this format: <|begin_of_thought|> {step-by-step analysis and decision-making with each step separated by '\n\n'} <|end_of_thought|>. Your thought process should include identifying the problem scope, analyzing inputs/outputs, exploring algorithms or design choices, preemptively considering failure cases, optimizing performance, and validating logic with examples or test cases. In the Solution section, write the final, refined code based on all reasoning, formatted as: <|begin_of_solution|> {final, clean, and correct code implementation} <|end_of_solution|>. This structure ensures the code is well-reasoned, properly scoped, and production-ready. Now, try to solve the following coding task using the above guidelines:
```
Please use `temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0` with repeat penalty set to 1.3
OR (recommended)
`Temperature = 0.7, top_k = 40, repeat penalty = 1.1, top_p = 0.95, min_p = 0.05` with a rolling window.
### Inputs and Outputs
* **Input:**
* Text prompts for questions, instructions, coding tasks, or summarizations
* Total input context of 128K tokens
* **Output:**
* Reasoned and structured text outputs
* Maximum output length of 8192 tokens
## Key Metrics
Synthia-S1-27b achieves roughly a 10–20% improvement over its base model on most benchmarks.
I scaled down each benchmark listed in order to complete the runs, and averaged the resulting numbers, but I can't verifiably claim that I ran the entire benchmark for each. (Ran out of budget, and I'm running everything on a 4090 now.) Hopefully I can get some community help with benchmarking.
GPQA Diamond (198 questions) -> 57%, one shot (improved from 24.3 on Gemma 3 PT 27B)
MMLU Pro (15% of the entire set) -> 75%, averaged, more details here: [output](https://pastebin.com/kmcYzALq) (beating Gemma 3 PT 27B at 67.5)
Based on this assessment and the heavy coding content in the dataset, I'm making this claim. Of course, I'm happy to be wrong and go back to the drawing board.
## Usage
Install the latest version of Transformers (>=4.50.0):
```Shell
pip install -U transformers
```
### Running with Pipeline API
```Python
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
model="tesslate/synthia-s1-27b",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [
{"role": "system", "content": [{"type": "text", "text": "You are a helpful, reasoning-focused assistant."}]},
{"role": "user", "content": [
{"type": "image", "url": "https://example.com/sample.jpg"},
{"type": "text", "text": "Explain the image."}
]}
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
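To apply the recommended sampling settings from the "KEY PARAMS TO RUN" section above, pass them as generation arguments. A sketch continuing from the pipeline example; these are standard `generate` kwargs filled with the card's recommended values:
```python
# Continuing from the pipeline example above (`pipe` and `messages` already defined)
output = pipe(
    text=messages,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
    top_k=40,
    top_p=0.95,
    min_p=0.05,
    repetition_penalty=1.1,
)
print(output[0]["generated_text"][-1]["content"])
```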
## Training Data
Synthia-S1-27b was trained on diverse data including:
* Multiple web documents
* Programming debugging and solutions
* Mathematical solutions and thinking steps
Synthia-S1-27b was trained on an A100 for 205+ hours, with multiple rounds of SFT (supervised fine-tuning) and RL (reinforcement learning).
## Model Architecture
* **Base Model**: Gemma3
* **Size**: 27 billion parameters
* **Type**: Decoder-only Transformer
* **Precision**: bf16 with int8 quantization
* **Training Objective**: Instruction tuning emphasizing reasoning, coding tasks, and factual accuracy
## Quantized Models
* [Synthia-S1-27b-Q4_K_M-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q4_K_M-GGUF)
* [Synthia-S1-27b-Q8_0-GGUF](https://huggingface.co/Tesslate/Synthia-S1-27b-Q8_0-GGUF)
## Limitations
* May require detailed prompt engineering for highly specific tasks
* Occasional hallucinations in less-explored domains
## Citation
```bibtex
@misc{tesslate_synthias127b,
title={Synthia-S1-27b: Advanced Reasoning and Coding Model},
author={tesslate},
year={2025},
publisher={tesslate},
url={https://tesslate.com}
}
```
**Developed by Tesslate** **[Huggingface](https://huggingface.co/tesslate)** **|** **[Website](https://tesslate.com)**
[Image Source](https://pixabay.com/illustrations/girl-backpack-night-surreal-sky-8257551/) |
namnguyenba2003/Vietnamese_Law_Embedding_finetuned_v2 | namnguyenba2003 | 2025-05-28T20:04:32Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:23168",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"vi",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:AITeamVN/Vietnamese_Embedding",
"base_model:finetune:AITeamVN/Vietnamese_Embedding",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-05-28T20:03:08Z | ---
language:
- vi
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:23168
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: AITeamVN/Vietnamese_Embedding
widget:
- source_sentence: 'Điều 3. Giải thích từ ngữ. Trong Thông tư này, các từ ngữ dưới
đây được hiểu như sau:
1. Đơn vị trực thuộc tổ chức tín dụng gồm chi nhánh, phòng giao dịch, văn phòng
đại diện của tổ chức tín dụng.
2. Thanh tra đơn vị trực thuộc tổ chức tín dụng là thanh tra một hoặc một số đơn
vị trực thuộc của tổ chức tín dụng bằng một kế hoạch tiến hành thanh tra và ban
hành một kết luận thanh tra đơn vị trực thuộc tổ chức tín dụng.
3. Thanh tra pháp nhân tổ chức tín dụng là thanh tra đồng thời trụ sở chính và
tất cả hoặc một số đơn vị trực thuộc của tổ chức tín dụng bằng một kế hoạch tiến
hành thanh tra và ban hành một kết luận thanh tra pháp nhân tổ chức tín dụng.
4. Thanh tra hợp nhất tổ chức tín dụng là thanh tra đồng thời pháp nhân tổ chức
tín dụng và tất cả hoặc một số pháp nhân tổ chức tín dụng là công ty con của tổ
chức tín dụng thuộc phạm vi quản lý nhà nước của Ngân hàng Nhà nước, bằng một
kế hoạch tiến hành thanh tra và ban hành một kết luận thanh tra hợp nhất tổ chức
tín dụng.
5. Thanh tra, giám sát ngành Ngân hàng là cơ quan thanh tra nhà nước, được tổ
chức thành hệ thống, gồm:'
sentences:
- Doanh nghiệp viễn thông được chia sẻ cơ sở hạ tầng kỹ thuật cần phải thông báo
cho doanh nghiệp chia sẻ những kế hoạch gì trước khi thực hiện?
- Cơ quan thanh tra, giám sát ngành Ngân hàng được tổ chức như thế nào theo quy
định pháp luật?
- Cơ quan nào có thẩm quyền quyết định thành lập trường năng khiếu thể thao, và
các bộ ngành nào có trách nhiệm quy định chương trình giảng dạy trong trường?
- source_sentence: 'c) Kho bạc Nhà nước cấp huyện: Báo cáo tổng hợp thông tin tài
chính huyện (bao gồm: Báo cáo tổng hợp thông tin tình hình tài chính huyện; Báo
cáo tổng hợp thông tin kết quả hoạt động tài chính huyện; Báo cáo tổng hợp thông
tin lưu chuyển tiền tệ huyện; Thuyết minh Báo cáo tổng hợp thông tin tài chính
huyện) được lập tương tự như Báo cáo tài chính nhà nước.
2. Các cơ quan, đơn vị, tổ chức quy định từ Khoản 4 đến Khoản 12 Điều 2 của Thông
tư này lập Báo cáo cung cấp thông tin tài chính gửi Kho bạc Nhà nước đồng cấp.
Cụ thể:
a) Cơ quan Thuế các cấp: Báo cáo được lập theo hướng dẫn tại Chế độ kế toán nghiệp
vụ thuế nội địa phản ánh các thông tin về: thu thuế, phí, lệ phí và thu nội địa
khác phát sinh trong năm báo cáo; tiền, phải thu, phải trả tại thời điểm kết thúc
năm báo cáo liên quan đến nghiệp vụ thu thuế và các khoản thu nội địa khác giao
cơ quan Thuế quản lý.'
sentences:
- Nếu tôi mua vé đi từ cảng A đến cảng B, nhưng khi lên phương tiện thì lại được
chở đến cảng C, thì tôi có quyền yêu cầu gì?
- Các đơn vị cấp huyện phải lập những báo cáo tài chính nào để tổng hợp thông tin
tài chính của huyện?
- Thuyền trưởng, người lái phương tiện chở khách có được phép chở những loại hàng
hóa nào cùng với hành khách?
- source_sentence: 'Điều 44. Báo cáo và thông tin cho cơ quan đại diện chủ sở hữu
1. DATC phải thực hiện công bố thông tin định kỳ và thông tin bất thường theo
quy định tại Điều 109, Điều 110 Luật Doanh nghiệp, Luật Quản lý, sử dụng vốn nhà
nước đầu tư vào sản xuất, kinh doanh tại doanh nghiệp và các văn bản hướng dẫn.
2. Người đại diện theo pháp luật của DATC quyết định và chịu trách nhiệm việc
công khai thông tin ra bên ngoài của Công ty. Bộ phận lưu giữ hồ sơ, tài liệu
của Công ty chỉ được cung cấp thông tin ra bên ngoài theo quyết định của người
đại diện theo pháp luật của Công ty hoặc người được người đại diện theo pháp luật
của Công ty ủy quyền.
3. Nội dung và nơi gửi thông tin thực hiện theo các quy định pháp luật có liên
quan.
4. Trường hợp có yêu cầu thanh tra, kiểm tra, kiểm toán của các cơ quan quản lý
nhà nước có thẩm quyền, người đại diện theo pháp luật của Công ty là người chịu
trách nhiệm trong tổ chức cung cấp thông tin theo đúng quy định pháp luật về thanh
tra, kiểm tra, kiểm toán.'
sentences:
- Người vận chuyển có được miễn trách nhiệm bồi thường thiệt hại đối với tính mạng,
sức khỏe của hành khách nếu thiệt hại xảy ra do lỗi của hành khách hoặc do tình
trạng sức khỏe của hành khách không?
- Khi đơn vị sự nghiệp công lập góp vốn liên doanh, liên kết, giá trị thương hiệu
của họ sẽ được phân bổ như thế nào?
- Ai là người quyết định và chịu trách nhiệm về việc công khai thông tin ra bên
ngoài của Công ty TNHH một thành viên Mua bán nợ Việt Nam?
- source_sentence: 'c) Khu vực công trường phải bố trí hợp lý, an toàn cao nhất về
khả năng chống thiên tai. Nhà xưởng, kho bãi chứa vật liệu phải bố trí ở nơi cao,
không ngập nước và phải được chằng buộc để không bị sập đổ khi gặp gió, bão;
d) Các trang thiết bị thi công phải được bảo quản an toàn, các phương tiện nổi
phải có âu giấu hoặc nơi khuất gió để neo giữ khi mưa bão;
đ) Phương án phòng ngừa thiên tai của nhà thầu thi công, nhà thầu quản lý, bảo
trì đường bộ phải gửi đến Ban Chỉ huy Phòng, chống thiên tai và Tìm kiếm cứu nạn
cấp huyện và cơ quan quản lý đường bộ để có sự chỉ đạo, phối hợp hiệu quả trong
phòng, chống thiên tai.
4. Phòng ngừa thiên tai trong quá trình thi công và hoàn thành công trình
a) Thi công công trình phải tuân thủ theo quy trình, bảo đảm an toàn giao thông
khi thi công các công trình liên quan đến đường bộ đang khai thác theo phương
án, biện pháp tổ chức thi công, kế hoạch phòng, chống thiên tai đã được duyệt;
b) Không vứt, bỏ vật liệu phế thải làm tắc nghẽn dòng chảy.'
sentences:
- Pháp luật quy định những yêu cầu gì về việc bảo quản các trang thiết bị thi công
và phương tiện nổi khi có mưa bão để tránh bị hư hỏng?
- Thông tư do Bộ Tài chính ban hành quy định những nguyên tắc gì trong việc xác
định giá trị tài sản là kết quả của nhiệm vụ khoa học và công nghệ sử dụng vốn
nhà nước?
- Vai trò và trách nhiệm của Hội đồng thẩm định của cơ quan quản lý thi hành án
hình sự thuộc Bộ Quốc phòng trong việc thẩm định hồ sơ, danh sách đề nghị giảm
thời hạn chấp hành án phạt tù là gì?
- source_sentence: 'Điều 36. Kiểm tra, giám sát an ninh đối với người, phương tiện,
đồ vật khi vào, ra và hoạt động tại khu vực hạn chế mà không phải hành khách,
hành lý, hàng hóa, bưu gửi và đồ vật đưa lên tàu bay
1. Nhân viên kiểm soát an ninh hàng không, nhân viên bảo vệ chịu trách nhiệm kiểm
tra, giám sát an ninh đối với người, phương tiện, đồ vật đưa vào, ra và hoạt động
tại khu vực hạn chế.
2. Người, phương tiện, đồ vật đưa vào khu vực hạn chế ngoại trừ đối tượng quy
định tại khoản 9 Điều này phải được kiểm tra an ninh hàng không các nội dung sau:
a) Thẻ, giấy phép kiểm soát an ninh hàng không;
b) Người, đồ vật mang theo người và giấy tờ cần thiết (nếu có);
c) Phương tiện và đồ vật trên phương tiện.
3. Người, phương tiện, đồ vật đưa ra ngoài khu vực hạn chế được lực lượng kiểm
soát an ninh hàng không kiểm tra khi có biểu hiện nghi ngờ như: trộm cắp tài sản,
buôn lậu và gian lận thương mại hoặc trong các trường hợp tăng cường bảo đảm an
ninh hàng không hoặc theo chỉ đạo của cơ quan quản lý nhà nước có thẩm quyền.'
sentences:
- Trong những trường hợp nào nhân viên kiểm soát an ninh hàng không được phép kiểm
tra người, phương tiện, đồ vật khi họ ra khỏi khu vực hạn chế của sân bay?
- Bộ Tài chính quy định những hình thức hỗ trợ cụ thể nào cho việc quản lý các công
trình vệ sinh công cộng và thu gom rác thải tại khu dân cư?
- Bộ Lao động - Thương binh và Xã hội quy định những trách nhiệm gì đối với Cục
An toàn lao động trong việc tổ chức tập huấn, thanh tra, kiểm tra và giám sát
hoạt động huấn luyện an toàn, vệ sinh lao động?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: Vietnamese_Embedding_finetuned
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 1024
type: dim_1024
metrics:
- type: cosine_accuracy@1
value: 0.738252427184466
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7918446601941748
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8702912621359223
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9316504854368932
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.738252427184466
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.6951456310679611
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.4871456310679612
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.26823300970873787
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.27407551240560946
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7351262135922331
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8430053937432579
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9230679611650484
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8368474490523952
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7850479272615185
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8136490188918367
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.7359223300970874
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7887378640776699
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8656310679611651
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9324271844660195
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7359223300970874
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.6924271844660194
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.48419417475728155
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.26827184466019416
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.2732858683926645
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7325177993527509
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8381769147788566
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9233915857605179
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8352135435030674
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7825751271382327
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.811289502071293
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.7343689320388349
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7906796116504854
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8683495145631068
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9297087378640777
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7343689320388349
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.6922977346278316
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.4862912621359223
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.2673009708737864
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.273007551240561
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7318964401294498
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8420992448759439
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9204207119741099
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8337008067206645
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7816617352442584
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8106497325723019
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.7207766990291262
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7712621359223301
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8570873786407767
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9289320388349515
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7207766990291262
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.6777993527508092
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.47658252427184467
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.26710679611650484
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.26852858683926645
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.717631067961165
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8273031283710895
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9189449838187702
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8254453612559721
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7694959161658179
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7997607920288672
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.7106796116504854
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7619417475728155
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8442718446601942
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9114563106796116
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7106796116504854
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.6687378640776699
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.4699805825242719
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.2618640776699029
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.2644509169363538
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7079870550161813
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8153419633225458
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9015339805825242
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.812058988182824
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7584647865618728
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7896458466261955
name: Cosine Map@100
---
# Vietnamese_Embedding_finetuned
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [AITeamVN/Vietnamese_Embedding](https://huggingface.co/AITeamVN/Vietnamese_Embedding) on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [AITeamVN/Vietnamese_Embedding](https://huggingface.co/AITeamVN/Vietnamese_Embedding) <!-- at revision 9f671cc30908f1d851787efcc05b7d15bad8b615 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** vi
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("namnguyenba2003/Vietnamese_Law_Embedding_finetuned_v2")
# Run inference
sentences = [
'Điều 36. Kiểm tra, giám sát an ninh đối với người, phương tiện, đồ vật khi vào, ra và hoạt động tại khu vực hạn chế mà không phải hành khách, hành lý, hàng hóa, bưu gửi và đồ vật đưa lên tàu bay\n1. Nhân viên kiểm soát an ninh hàng không, nhân viên bảo vệ chịu trách nhiệm kiểm tra, giám sát an ninh đối với người, phương tiện, đồ vật đưa vào, ra và hoạt động tại khu vực hạn chế.\n2. Người, phương tiện, đồ vật đưa vào khu vực hạn chế ngoại trừ đối tượng quy định tại khoản 9 Điều này phải được kiểm tra an ninh hàng không các nội dung sau:\na) Thẻ, giấy phép kiểm soát an ninh hàng không;\nb) Người, đồ vật mang theo người và giấy tờ cần thiết (nếu có);\nc) Phương tiện và đồ vật trên phương tiện.\n3. Người, phương tiện, đồ vật đưa ra ngoài khu vực hạn chế được lực lượng kiểm soát an ninh hàng không kiểm tra khi có biểu hiện nghi ngờ như: trộm cắp tài sản, buôn lậu và gian lận thương mại hoặc trong các trường hợp tăng cường bảo đảm an ninh hàng không hoặc theo chỉ đạo của cơ quan quản lý nhà nước có thẩm quyền.',
'Trong những trường hợp nào nhân viên kiểm soát an ninh hàng không được phép kiểm tra người, phương tiện, đồ vật khi họ ra khỏi khu vực hạn chế của sân bay?',
'Bộ Tài chính quy định những hình thức hỗ trợ cụ thể nào cho việc quản lý các công trình vệ sinh công cộng và thu gom rác thải tại khu dân cư?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_1024`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 1024
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7383 |
| cosine_accuracy@3 | 0.7918 |
| cosine_accuracy@5 | 0.8703 |
| cosine_accuracy@10 | 0.9317 |
| cosine_precision@1 | 0.7383 |
| cosine_precision@3 | 0.6951 |
| cosine_precision@5 | 0.4871 |
| cosine_precision@10 | 0.2682 |
| cosine_recall@1 | 0.2741 |
| cosine_recall@3 | 0.7351 |
| cosine_recall@5 | 0.843 |
| cosine_recall@10 | 0.9231 |
| **cosine_ndcg@10** | **0.8368** |
| cosine_mrr@10 | 0.785 |
| cosine_map@100 | 0.8136 |
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 768
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7359 |
| cosine_accuracy@3 | 0.7887 |
| cosine_accuracy@5 | 0.8656 |
| cosine_accuracy@10 | 0.9324 |
| cosine_precision@1 | 0.7359 |
| cosine_precision@3 | 0.6924 |
| cosine_precision@5 | 0.4842 |
| cosine_precision@10 | 0.2683 |
| cosine_recall@1 | 0.2733 |
| cosine_recall@3 | 0.7325 |
| cosine_recall@5 | 0.8382 |
| cosine_recall@10 | 0.9234 |
| **cosine_ndcg@10** | **0.8352** |
| cosine_mrr@10 | 0.7826 |
| cosine_map@100 | 0.8113 |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 512
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7344 |
| cosine_accuracy@3 | 0.7907 |
| cosine_accuracy@5 | 0.8683 |
| cosine_accuracy@10 | 0.9297 |
| cosine_precision@1 | 0.7344 |
| cosine_precision@3 | 0.6923 |
| cosine_precision@5 | 0.4863 |
| cosine_precision@10 | 0.2673 |
| cosine_recall@1 | 0.273 |
| cosine_recall@3 | 0.7319 |
| cosine_recall@5 | 0.8421 |
| cosine_recall@10 | 0.9204 |
| **cosine_ndcg@10** | **0.8337** |
| cosine_mrr@10 | 0.7817 |
| cosine_map@100 | 0.8106 |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 256
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7208 |
| cosine_accuracy@3 | 0.7713 |
| cosine_accuracy@5 | 0.8571 |
| cosine_accuracy@10 | 0.9289 |
| cosine_precision@1 | 0.7208 |
| cosine_precision@3 | 0.6778 |
| cosine_precision@5 | 0.4766 |
| cosine_precision@10 | 0.2671 |
| cosine_recall@1 | 0.2685 |
| cosine_recall@3 | 0.7176 |
| cosine_recall@5 | 0.8273 |
| cosine_recall@10 | 0.9189 |
| **cosine_ndcg@10** | **0.8254** |
| cosine_mrr@10 | 0.7695 |
| cosine_map@100 | 0.7998 |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 128
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7107 |
| cosine_accuracy@3 | 0.7619 |
| cosine_accuracy@5 | 0.8443 |
| cosine_accuracy@10 | 0.9115 |
| cosine_precision@1 | 0.7107 |
| cosine_precision@3 | 0.6687 |
| cosine_precision@5 | 0.47 |
| cosine_precision@10 | 0.2619 |
| cosine_recall@1 | 0.2645 |
| cosine_recall@3 | 0.708 |
| cosine_recall@5 | 0.8153 |
| cosine_recall@10 | 0.9015 |
| **cosine_ndcg@10** | **0.8121** |
| cosine_mrr@10 | 0.7585 |
| cosine_map@100 | 0.7896 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 23,168 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:--------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 109 tokens</li><li>mean: 209.39 tokens</li><li>max: 313 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 37.15 tokens</li><li>max: 93 tokens</li></ul> |
* Samples:
| positive | anchor |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Khoản 1. Hộ gia đình, cá nhân có trách nhiệm sau đây:<br>a) Giảm thiểu, phân loại chất thải rắn sinh hoạt tại nguồn, thu gom và chuyển rác thải sinh hoạt đã được phân loại đến đúng nơi quy định;<br>b) Giảm thiểu, xử lý và xả nước thải sinh hoạt đúng nơi quy định; không để vật nuôi gây mất vệ sinh trong khu dân cư;<br>c) Không phát tán khí thải, gây tiếng ồn, độ rung và tác động khác gây ô nhiễm môi trường, ảnh hưởng xấu đến cộng đồng dân cư xung quanh;<br>d) Chi trả kinh phí dịch vụ thu gom, vận chuyển và xử lý chất thải theo quy định của pháp luật;<br>đ) Tham gia hoạt động bảo vệ môi trường tại cộng đồng dân cư;<br>e) Có công trình vệ sinh theo quy định. Trường hợp chưa có công trình, thiết bị xử lý nước thải, khi xây dựng mới hoặc cải tạo, sửa chữa nhà ở riêng lẻ tại đô thị, khu dân cư tập trung, phải xây lắp công trình, thiết bị xử lý nước thải tại chỗ đáp ứng yêu cầu về bảo vệ môi trường theo quy định.</code> | <code>Luật Bảo vệ môi trường quy định những trách nhiệm gì đối với hộ gia đình, cá nhân trong việc quản lý chất thải rắn, xử lý nước thải và ngăn ngừa ô nhiễm môi trường?</code> |
| <code>Điều 34. Đối tượng chịu sự kiểm tra, kiểm soát tần số vô tuyến điện<br>1. Tổ chức, cá nhân sử dụng tần số và thiết bị vô tuyến điện trên lãnh thổ nước Cộng hòa xã hội chủ nghĩa Việt Nam phải chịu sự kiểm tra, kiểm soát tần số vô tuyến điện của cơ quan nhà nước có thẩm quyền.<br>2. Người trực tiếp khai thác thiết bị vô tuyến điện, thiết bị vô tuyến điện lắp đặt trên tàu biển, tàu bay của Việt Nam và của nước ngoài khi vào lãnh thổ nước Cộng hòa xã hội chủ nghĩa Việt Nam phải tuân theo quy định của pháp luật Việt Nam, thỏa thuận quốc tế, điều ước quốc tế mà Cộng hòa xã hội chủ nghĩa Việt Nam là thành viên và phải chịu sự kiểm tra, kiểm soát tần số vô tuyến điện của cơ quan nhà nước có thẩm quyền.</code> | <code>Cơ quan nhà nước nào có thẩm quyền thực hiện kiểm tra, kiểm soát tần số vô tuyến điện tại Việt Nam?</code> |
| <code>Điều 34. Điều kiện bảo đảm chất lượng hàng hóa nhập khẩu<br>1. Hàng hóa nhập khẩu phải được công bố tiêu chuẩn áp dụng theo quy định tại Điều 23 của Luật này và ghi nhãn theo quy định của pháp luật về nhãn hàng hóa.<br>2. Hàng hóa nhập khẩu thuộc nhóm 2 phải được công bố hợp quy, chứng nhận hợp quy theo quy chuẩn kỹ thuật tương ứng liên quan đến quá trình sản xuất, sản phẩm cuối cùng bởi tổ chức chứng nhận được chỉ định hoặc được thừa nhận theo quy định tại Điều 26 của Luật này.<br>3. Hàng hóa nhập khẩu thuộc nhóm 2 không đáp ứng quy định tại khoản 2 Điều này khi nhập khẩu phải được tổ chức giám định được chỉ định hoặc được thừa nhận theo quy định tại Điều 26 của Luật này giám định tại cửa khẩu xuất hoặc cửa khẩu nhập.<br>4. Hàng hóa nhập khẩu thuộc nhóm 2 phải được kiểm tra chất lượng khi nhập khẩu theo nội dung quy định tại khoản 2 Điều 27, trình tự, thủ tục quy định tại Điều 35 của Luật này.</code> | <code>Luật chất lượng sản phẩm, hàng hóa có quy định gì về việc kiểm tra chất lượng đối với hàng hóa nhập khẩu thuộc nhóm 2 khi nhập khẩu vào Việt Nam?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
1024,
768,
512,
256,
128
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
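Because the loss distributes weight across the 1024/768/512/256/128 dimensions, the finetuned model can be used with truncated embeddings at inference time. A minimal sketch using the `truncate_dim` option available in recent sentence-transformers releases:
```python
from sentence_transformers import SentenceTransformer

# Load with embeddings truncated to 256 dims, one of the Matryoshka sizes trained above
model = SentenceTransformer(
    "namnguyenba2003/Vietnamese_Law_Embedding_finetuned_v2",
    truncate_dim=256,
)
embeddings = model.encode(["Điều kiện bảo đảm chất lượng hàng hóa nhập khẩu là gì?"])
print(embeddings.shape)  # (1, 256)
```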
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `gradient_accumulation_steps`: 8
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `tf32`: True
- `dataloader_num_workers`: 8
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `ddp_find_unused_parameters`: False
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 8
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: False
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_1024_cosine_ndcg@10 | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 |
|:-------:|:-------:|:-------------:|:-----------------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|
| 0.2210 | 10 | 6.2144 | - | - | - | - | - |
| 0.4420 | 20 | 3.2176 | - | - | - | - | - |
| 0.6630 | 30 | 2.2397 | - | - | - | - | - |
| 0.8840 | 40 | 2.0719 | - | - | - | - | - |
| 1.0 | 46 | - | 0.8246 | 0.8218 | 0.8229 | 0.8114 | 0.7964 |
| 1.0884 | 50 | 1.558 | - | - | - | - | - |
| 1.3094 | 60 | 1.2167 | - | - | - | - | - |
| 1.5304 | 70 | 1.2956 | - | - | - | - | - |
| 1.7514 | 80 | 1.2846 | - | - | - | - | - |
| 1.9724 | 90 | 1.4133 | - | - | - | - | - |
| 2.0 | 92 | - | 0.8354 | 0.8349 | 0.8325 | 0.8187 | 0.8040 |
| 2.1768 | 100 | 0.9511 | - | - | - | - | - |
| 2.3978 | 110 | 1.0338 | - | - | - | - | - |
| 2.6188 | 120 | 1.0112 | - | - | - | - | - |
| 2.8398 | 130 | 0.8881 | - | - | - | - | - |
| 3.0 | 138 | - | 0.8325 | 0.8309 | 0.8302 | 0.8235 | 0.8108 |
| 3.0442 | 140 | 0.858 | - | - | - | - | - |
| 3.2652 | 150 | 0.7956 | - | - | - | - | - |
| 3.4862 | 160 | 0.8804 | - | - | - | - | - |
| 3.7072 | 170 | 0.9071 | - | - | - | - | - |
| 3.9282 | 180 | 0.9237 | - | - | - | - | - |
| 4.0 | 184 | - | 0.8371 | 0.8352 | 0.8327 | 0.8245 | 0.8103 |
| 4.1326 | 190 | 0.7106 | - | - | - | - | - |
| 4.3536 | 200 | 0.7552 | - | - | - | - | - |
| 4.5746 | 210 | 0.9339 | - | - | - | - | - |
| 4.7956 | 220 | 0.776 | - | - | - | - | - |
| **5.0** | **230** | **0.8253** | **0.8368** | **0.8352** | **0.8337** | **0.8254** | **0.8121** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 4.1.0
- Transformers: 4.52.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
Homepagee/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_scurrying_mongoose | Homepagee | 2025-05-28T20:04:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am pensive scurrying mongoose",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-16T07:17:12Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_scurrying_mongoose
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am pensive scurrying mongoose
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_scurrying_mongoose
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Homepagee/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-pensive_scurrying_mongoose", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
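For reference, a minimal GRPO setup with TRL looks roughly like the following; the dataset and reward function here are toy placeholders from the TRL documentation, not the swarm's actual reward:
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder prompt dataset

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 50 characters
    return [-abs(50 - len(c)) for c in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-1.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="Qwen2.5-1.5B-GRPO"),
    train_dataset=dataset,
)
trainer.train()
```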
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
akilwade/thumbnails | akilwade | 2025-05-28T20:03:47Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-28T19:48:36Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: thumbz
---
# Thumbnails
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `thumbz` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "thumbz",
"lora_weights": "https://huggingface.co/akilwade/thumbnails/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('akilwade/thumbnails', weight_name='lora.safetensors')
image = pipeline('thumbz').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/akilwade/thumbnails/discussions) to add images that show off what you’ve made with this LoRA.
|
mradermacher/Deep-Reasoning-Llama-3.2-COT-3B-GGUF | mradermacher | 2025-05-28T20:03:29Z | 58 | 0 | transformers | [
"transformers",
"gguf",
"reasoning",
"thinking",
"cot",
"deepseek",
"Llama 3.2",
"128k context",
"fine tune",
"llama-3",
"llama-3.2",
"en",
"base_model:DavidAU/Deep-Reasoning-Llama-3.2-COT-3B",
"base_model:quantized:DavidAU/Deep-Reasoning-Llama-3.2-COT-3B",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-02-17T17:39:01Z | ---
base_model: DavidAU/Deep-Reasoning-Llama-3.2-COT-3B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- reasoning
- thinking
- cot
- deepseek
- Llama 3.2
- 128k context
- fine tune
- llama-3
- llama-3.2
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DavidAU/Deep-Reasoning-Llama-3.2-COT-3B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-COT-3B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
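As one concrete option, the llama-cpp-python bindings can pull a quant straight from this repo. A sketch, assuming a recent llama-cpp-python release with hub support; the filename matches the Q4_K_M entry in the table below:
```python
from llama_cpp import Llama

# Download and load the "fast, recommended" Q4_K_M quant from this repo
llm = Llama.from_pretrained(
    repo_id="mradermacher/Deep-Reasoning-Llama-3.2-COT-3B-GGUF",
    filename="Deep-Reasoning-Llama-3.2-COT-3B.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Briefly explain chain-of-thought reasoning.", max_tokens=64)
print(out["choices"][0]["text"])
```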
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-COT-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-COT-3B.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-COT-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-COT-3B.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-COT-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-COT-3B.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-COT-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-COT-3B.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-COT-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-COT-3B.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-COT-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-COT-3B.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-COT-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-COT-3B.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-COT-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-COT-3B.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-COT-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-COT-3B.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-COT-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-COT-3B.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-COT-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-COT-3B.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-COT-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-COT-3B.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Deep-Reasoning-Llama-3.2-Overthinker-3B-GGUF | mradermacher | 2025-05-28T20:03:07Z | 38 | 0 | transformers | [
"transformers",
"gguf",
"reasoning",
"thinking",
"cot",
"llama-3",
"llama-3.2",
"deepseek",
"Llama 3.2",
"128k context",
"fine tune",
"en",
"base_model:DavidAU/Deep-Reasoning-Llama-3.2-Overthinker-3B",
"base_model:quantized:DavidAU/Deep-Reasoning-Llama-3.2-Overthinker-3B",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-02-17T18:09:09Z | ---
base_model: DavidAU/Deep-Reasoning-Llama-3.2-Overthinker-3B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- reasoning
- thinking
- cot
- llama-3
- llama-3.2
- deepseek
- Llama 3.2
- 128k context
- fine tune
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DavidAU/Deep-Reasoning-Llama-3.2-Overthinker-3B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Overthinker-3B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
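A minimal sketch, assuming `llama-cpp-python` is installed and one of the quants below has been downloaded (filename and settings are illustrative):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Use the path of the quant you downloaded from the table below.
llm = Llama(
    model_path="Deep-Reasoning-Llama-3.2-Overthinker-3B.Q2_K.gguf",
    n_ctx=8192,  # modest context; the base model supports up to 128k
)
out = llm("Reason carefully: what makes ice float on water?", max_tokens=256)
print(out["choices"][0]["text"])
```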
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Overthinker-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Overthinker-3B.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Overthinker-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Overthinker-3B.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Overthinker-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Overthinker-3B.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Overthinker-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Overthinker-3B.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Overthinker-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Overthinker-3B.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Overthinker-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Overthinker-3B.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Overthinker-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Overthinker-3B.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Overthinker-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Overthinker-3B.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Overthinker-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Overthinker-3B.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Overthinker-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Overthinker-3B.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Overthinker-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Overthinker-3B.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Overthinker-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Overthinker-3B.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B-GGUF | mradermacher | 2025-05-28T20:02:37Z | 178 | 1 | transformers | [
"transformers",
"gguf",
"reasoning",
"thinking",
"cot",
"deepseek",
"Llama 3.2",
"128k context",
"fine tune",
"llama-3",
"llama-3.2",
"en",
"base_model:DavidAU/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B",
"base_model:quantized:DavidAU/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-02-17T19:03:32Z | ---
base_model: DavidAU/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- reasoning
- thinking
- cot
- deepseek
- Llama 3.2
- 128k context
- fine tune
- llama-3
- llama-3.2
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DavidAU/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
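A minimal sketch, assuming `llama-cpp-python` is installed and one of the quants below has been downloaded (filename and settings are illustrative):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Use the path of the quant you downloaded from the table below.
llm = Llama(
    model_path="Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B.Q2_K.gguf",
    n_ctx=8192,  # modest context; the base model supports up to 128k
)
out = llm("Walk through your reasoning: how does a siphon work?", max_tokens=256)
print(out["choices"][0]["text"])
```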
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B-GGUF/resolve/main/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
FormlessAI/7ec06d02-ce6e-460f-bd54-4897c013198a | FormlessAI | 2025-05-28T20:02:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"unsloth",
"arxiv:2402.03300",
"base_model:unsloth/llama-3-8b",
"base_model:finetune:unsloth/llama-3-8b",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T17:38:46Z | ---
base_model: unsloth/llama-3-8b
library_name: transformers
model_name: 7ec06d02-ce6e-460f-bd54-4897c013198a
tags:
- generated_from_trainer
- trl
- grpo
- unsloth
licence: license
---
# Model Card for 7ec06d02-ce6e-460f-bd54-4897c013198a
This model is a fine-tuned version of [unsloth/llama-3-8b](https://huggingface.co/unsloth/llama-3-8b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/7ec06d02-ce6e-460f-bd54-4897c013198a", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/up1pi5a2)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
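For orientation, here is a minimal, hedged sketch of a GRPO run with TRL; the dataset and reward function are illustrative placeholders, not the actual setup used for this model:
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder prompt dataset; the real training data for this run is not documented here.
dataset = load_dataset("trl-lib/tldr", split="train")

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 200 characters.
    return [-abs(200 - len(c)) for c in completions]

trainer = GRPOTrainer(
    model="unsloth/llama-3-8b",  # base model named in this card
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-output"),
    train_dataset=dataset,
)
trainer.train()
```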
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |