modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-07-16 06:27:54
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 522
values | tags
listlengths 1
4.05k
| pipeline_tag
stringclasses 55
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-07-16 06:27:41
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
Hungkolp/Yeiwl | Hungkolp | 2025-05-29T08:29:18Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
]
| null | 2025-05-29T08:29:18Z | ---
license: bigscience-openrail-m
---
|
CultriX/Qwen3-8B-Hippocratesv1-GGUF | CultriX | 2025-05-29T08:12:10Z | 43 | 0 | transformers | [
"transformers",
"gguf",
"qwen3",
"text-generation",
"medical",
"grpo",
"math",
"reasoning",
"fine-tuned",
"qlora",
"lora",
"multi-stage-finetuning",
"autoquant",
"conversational",
"en",
"es",
"dataset:FreedomIntelligence/medical-o1-reasoning-SFT",
"dataset:unsloth/OpenMathReasoning-mini",
"dataset:mlabonne/guanaco-llama2-1k",
"dataset:madrylab/gsm8k-platinum",
"base_model:Qwen/Qwen3-8B",
"base_model:adapter:Qwen/Qwen3-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T17:33:42Z | ---
license: apache-2.0
datasets:
- FreedomIntelligence/medical-o1-reasoning-SFT
- unsloth/OpenMathReasoning-mini
- mlabonne/guanaco-llama2-1k
- madrylab/gsm8k-platinum
language:
- en
- es
base_model:
- Qwen/Qwen3-8B
pipeline_tag: text-generation
library_name: transformers
tags:
- medical
- grpo
- math
- reasoning
- fine-tuned
- qlora
- lora
- multi-stage-finetuning
- autoquant
- gguf
---
# Qwen3-8B-MultiStage-Finetune-Hybrid
## Model Description
This is a **fine-tuned** version of the **Qwen/Qwen3-8B** large language model. It's specialized through a multi-stage training pipeline focusing on **medical reasoning**, **mathematical problem-solving**, and **general conversational abilities**. The model was trained using **QLoRA** (Quantized Low-Rank Adaptation) and **GRPO** (Generative Reinforcement Learning with Policy Optimization) for both efficiency and enhanced performance in its specialized domains.
The training methodology uses a progressive approach, building capabilities in distinct areas before consolidating them:
1. **Medical Reasoning SFT:** Initial fine-tuning on a specialized medical dataset to adapt the model to medical explanations and reasoning.
2. **Mathematical SFT:** Further fine-tuning on a mathematical dataset to enhance its ability to solve math problems.
3. **Mathematical GRPO:** A reinforcement learning stage that leverages a reward function to optimize the model's accuracy and ability to provide structured mathematical solutions, particularly with answers in a `\boxed{}` format.
4. **General Chat SFT:** Final fine-tuning on a diverse chat dataset to improve conversational fluency, helpfulness, and alignment with common dialogue patterns.
## Training Details
### Training Data
The model was trained on a carefully selected set of public datasets:
* **Medical Reasoning:** `FreedomIntelligence/medical-o1-reasoning-SFT`
* **Mathematical Reasoning:** `unsloth/OpenMathReasoning-mini`
* **General Conversation:** `mlabonne/guanaco-llama2-1k`
### Training Procedure
The model was fine-tuned using a hybrid approach that combines the efficient training capabilities of the `unsloth` library with the advanced reinforcement learning features of `trl`:
* **Base Model:** Qwen/Qwen3-8B
* **Quantization:** 4-bit NormalFloat (NF4) with double quantization enabled (`bnb_4bit_use_double_quant=True`), allowing for efficient training on limited GPU memory.
* **LoRA Configuration:** A rank of `r=24`, `lora_alpha=32`, and `lora_dropout=0.05` was applied. Key attention and feed-forward projection layers (`q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`) were targeted for adaptation.
* **Gradient Checkpointing:** Enabled for memory efficiency, with `recompute_grad=True` for Unsloth-specific optimizations.
* **Dynamic Hyperparameters:** Batch sizes and gradient accumulation steps were adaptively adjusted per training stage to optimize GPU memory utilization and training throughput.
* **Learning Rate Schedule:** A cosine decay schedule with a warmup ratio was used. Learning rates were customized for each training stage.
* **Optimizer:** `adamw_8bit`.
* **Regularization:** Gradient norm clipping (`max_grad_norm=1.0`) and weight decay (`0.01`) were applied to prevent exploding gradients and overfitting.
* **Early Stopping:** Applied during SFT stages with a patience of 2 on validation loss, stopping training if no significant improvement was observed.
* **Hardware:** Training was performed on a single GPU.
* **Software Stack:** Python, Hugging Face `transformers`, `unsloth`, `trl`, `datasets`, `wandb` (for experiment tracking), and `vllm` (used during the GRPO stage for efficient text generation).
## Usage
This model is designed for **text generation**, particularly in response to chat-based prompts or specific medical and mathematical queries. To get the best results, ensure your prompts are formatted correctly following the model's training structure.
### Load the Model
```python
import torch
from transformers import AutoTokenizer, BitsAndBytesConfig
from unsloth import FastLanguageModel
# Configuration parameters (matching training)
MAX_SEQ_LENGTH = 2048
LOAD_IN_4BIT = True
USE_DOUBLE_QUANT = True
# Initialize BitsAndBytesConfig as used during training
bnb_config = BitsAndBytesConfig(
load_in_4bit=LOAD_IN_4BIT,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16,
bnb_4bit_use_double_quant=USE_DOUBLE_QUANT,
)
# Replace with the actual path to your uploaded model on Hugging Face Hub
model_id = "your-huggingface-username/Qwen3-8B-MultiStage-Finetune-Hybrid"
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token # Ensure pad token is set for generation
# Load the model using Unsloth's optimized loading
model = FastLanguageModel.from_pretrained(
model_id,
quantization_config=bnb_config,
max_seq_length=MAX_SEQ_LENGTH,
device_map="auto", # Automatically maps model to available GPUs
)
# Example for a general chat interaction
messages = [
{"role": "system", "content": "You are a friendly and helpful assistant."},
{"role": "user", "content": "Tell me a short, funny story about a clumsy robot."},
]
# Apply the chat template and tokenize inputs
input_ids = tokenizer.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True, # Important: Add the prompt for the assistant's turn
return_tensors="pt"
).to("cuda") # Move inputs to GPU
# Generate outputs
outputs = model.generate(
input_ids,
max_new_tokens=512, # Maximum tokens to generate
do_sample=True, # Enable sampling for more diverse outputs
temperature=0.7, # Control randomness
top_p=0.95 # Nucleus sampling
)
# Decode and print the generated text, skipping special tokens
print("--- General Chat Example ---")
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Example for a math problem (model is trained to provide a \boxed{} answer)
math_messages = [
{"role": "system", "content": "You are a math solver. Provide your reasoning within \\ and the final answer in \\boxed{} format."},
{"role": "user", "content": "If a car travels at 80 km/h for 2.5 hours, and then at 60 km/h for another 1.5 hours, what is the total distance traveled?"},
]
# Apply math chat template and tokenize inputs
math_input_ids = tokenizer.apply_chat_template(
math_messages,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt"
).to("cuda")
# Generate outputs for the math problem
math_outputs = model.generate(
math_input_ids,
max_new_tokens=512,
do_sample=True,
temperature=0.6, # Slightly lower temperature for more deterministic math outputs
top_p=0.9
)
print("\n--- Math Example ---")
print(tokenizer.decode(math_outputs[0], skip_special_tokens=True))
Limitations and Bias
As a large language model, this fine-tuned Qwen-8B model inherits general limitations and potential biases from its extensive pre-training and fine-tuning data:
Hallucinations: The model may generate information that is factually incorrect or nonsensical. Always cross-reference critical information.
Factual Accuracy: While specialized in medical and mathematical domains, it should not be used as a substitute for professional medical advice, complex mathematical proofs, or any domain requiring absolute precision without independent verification.
Bias: The model's outputs are influenced by the biases present in its training data (both the base model's pre-training and the fine-tuning datasets). This may manifest in stereotypical, harmful, or unfair content.
Language Proficiency: Primarily trained on English text. While some Spanish content was present in the general chat dataset, its proficiency in Spanish or other languages is not guaranteed and may vary.
Context Window: Limited by its max_seq_length (2048 tokens). Very long inputs or extensive multi-turn conversations might lead to degraded performance or truncation of context.
Ethical Considerations
Users should be aware of the following ethical considerations when deploying or using this model:
Not for Critical Applications: This model is intended for research, experimentation, and exploratory applications. It is not designed or validated for use in critical systems where accuracy, reliability, and safety are paramount (e.g., medical diagnosis, financial advice, legal counsel, or decision-making systems impacting individuals).
Responsible AI Use: Deploy and use this model responsibly, adhering to ethical AI guidelines and principles. Implement safeguards to monitor its outputs and prevent potential misuse, discrimination, or the generation of harmful content.
Data Privacy and Security: Do not use this model with sensitive personal identifiable information (PII) or confidential data. Ensure compliance with all relevant data privacy regulations.
Transparency: Be transparent with end-users when they are interacting with an AI system.
Citation
If you use this model or the training methodology, please consider citing the following key components:
Code snippet
@misc{qwen3,
author = {Qwen Team},
title = {Qwen3-8B},
year = {2024},
publisher = {Hugging Face},
howpublished = {\url{[https://huggingface.co/Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B)}}
}
@misc{unsloth,
author = {Daniel Han},
title = {Unsloth: Fast LLM Fine-tuning},
year = {2023},
publisher = {GitHub},
howpublished = {\url{[https://github.com/unsloth/unsloth](https://github.com/unsloth/unsloth)}}
}
@misc{trl,
author = {Hugging Face Team},
title = {TRL: Transformer Reinforcement Learning},
year = {2023},
publisher = {GitHub},
howpublished = {\url{[https://github.com/huggingface/trl](https://github.com/huggingface/trl)}}
}
@misc{medical_dataset,
author = {FreedomIntelligence},
title = {medical-o1-reasoning-SFT},
year = {2024},
publisher = {Hugging Face},
howpublished = {\url{[https://huggingface.co/FreedomIntelligence/medical-o1-reasoning-SFT](https://huggingface.co/FreedomIntelligence/medical-o1-reasoning-SFT)}}
}
@misc{openmathreasoning_mini,
author = {unsloth},
title = {OpenMathReasoning-mini},
year = {2023},
publisher = {Hugging Face},
howpublished = {\url{[https://huggingface.co/datasets/unsloth/OpenMathReasoning-mini](https://huggingface.co/datasets/unsloth/OpenMathReasoning-mini)}}
}
@misc{guanaco_llama2_1k,
author = {mlabonne},
title = {guanaco-llama2-1k},
year = {2023},
publisher = {Hugging Face},
howpublished = {\url{[https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k)}}
} |
jack-perlo/ResNet152-Imagenet2012 | jack-perlo | 2025-05-29T08:12:05Z | 0 | 0 | keras | [
"keras",
"tflite",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-29T08:02:33Z | ---
license: apache-2.0
---
|
mnm3/gemma-3-12b-it-mnm3-lora-gpu-bf16-labels | mnm3 | 2025-05-29T08:08:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-29T08:08:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yashkk3640/eCharts-v3 | yashkk3640 | 2025-05-29T08:07:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-29T08:05:26Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** yashkk3640
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
This is version-3 for echart with updated data with geo chart, drill down, reference lines, mile stones etc.
|
Tuwhy/Llama-3.2V-11B-Sherlock-iter2 | Tuwhy | 2025-05-29T08:05:15Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"mllama",
"image-text-to-text",
"conversational",
"en",
"dataset:Xkev/LLaVA-CoT-100k",
"arxiv:2505.22651",
"base_model:meta-llama/Llama-3.2-11B-Vision-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-11B-Vision-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-27T17:00:09Z | ---
license: apache-2.0
language:
- en
base_model:
- meta-llama/Llama-3.2-11B-Vision-Instruct
datasets:
- Xkev/LLaVA-CoT-100k
pipeline_tag: image-text-to-text
library_name: transformers
---
# Sherlock: Self-Correcting Reasoning in Vision-Language Models
## Introduction
**Sherlock is a training framework focus on improving Vision-Language Models reasoning and self-correction capabilities.**
GitHub repo: [https://github.com/DripNowhy/Sherlock](https://github.com/DripNowhy/Sherlock)
Project Page: [https://dripnowhy.github.io/Sherlock/](https://dripnowhy.github.io/Sherlock/)
arXiv: [https://arxiv.org/abs/2505.22651](https://arxiv.org/abs/2505.22651)
Sherlock Iter2 is obtained by fine-tuning **Llama3.2-Vision-11B-Instruct** on randomly sampled 20k annotation from [LLaVA-CoT](https://huggingface.co/datasets/Xkev/LLaVA-CoT-100k), and using 5k randomly sampled question in each self-improvement iteration.
## Requirements
* `transformers==4.45.2`.
## Quick Use
### 🤗 Hugging Face Transformers
```python
from transformers import MllamaForConditionalGeneration, AutoProcessor
import torch
from PIL import Image
CORRECTION_TEMPLATE = """Below is a QUESTION from a user and an EXAMPLE RESPONSE.
Please provide a more helpful RESPONSE, improving the EXAMPLE RESPONSE by making the content even clearer, more accurate, and with a reasonable logic.
Focus on addressing the human's QUESTION step by step based on the image without including irrelevant content.
QUESTION:
{Question}
EXAMPLE RESPONSE:
{Example_Response}
Now, refine and improve the RESPONSE further. You can consider two approaches:
1. REFINEMENT: If the SUMMARY section in the response is closely related to the question, the CAPTION section accurately describes the image, the REASONING section is logically clear and correct without any contradictions, and the CONCLUSION provides an accurate answer based on the previous steps, enhance clarity, accuracy, or reasoning logic as needed.
2. NEW RESPONSE: If the SUMMARY section incorrectly summarizes the intent of the issue, the CAPTION contains content unrelated to or incorrect about the image, there are logical errors or contradictions in the REASONING, or the CONCLUSION incorrectly states the findings, please enhance the accuracy and quality of each step, and craft a more effective RESPONSE that thoroughly resolves the QUESTION.
RESPONSE:
"""
question = """Hint: Please answer the question requiring an integer answer and provide the final value, e.g., 1, 2, 3, at the end.
Question: What is the age gap between these two people in image? (Unit: years)"""
image_path = "./case.jpg"
model_path = "Tuwhy/Llama-3.2V-11B-Sherlock-iter2"
model = MllamaForConditionalGeneration.from_pretrained(
model_path,
torch_dtype=torch.bfloat16,
device_map='cpu',
).cuda().eval()
device = 'cuda'
processor = AutoProcessor.from_pretrained(model_path)
# kwargs_default = dict(do_sample=False, max_new_tokens=2048, temperature=0.0, top_p=None, num_beams=1)
kwargs_default = dict(do_sample=True, max_new_tokens=2048, temperature=0.6, top_p=0.7, num_beams=1)
image = Image.open(image_path)
messages = [
{'role': 'user', 'content': [
{'type': 'image'},
{'type': 'text', 'text': question}
]}
]
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, input_text, return_tensors='pt').to(device)
output = model.generate(**inputs, **kwargs_default)
response = processor.decode(output[0][inputs['input_ids'].shape[1]:]).replace('<|eot_id|>', '')
print(f"INTIAL RESPONSE: {response}")
for i in range(3):
prompt = CORRECTION_TEMPLATE.format(
Question=question,
Example_Response=response
)
messages = [
{'role': 'user', 'content': [
{'type': 'image'},
{'type': 'text', 'text': prompt}
]}
]
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, input_text, return_tensors='pt').to(device)
output = model.generate(**inputs, **kwargs_default)
response = processor.decode(output[0][inputs['input_ids'].shape[1]:]).replace('<|eot_id|>', '')
print(f"REFINED RESPONSE {i+1}: {response}")
``` |
Nastoi/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-curious_nocturnal_sandpiper | Nastoi | 2025-05-29T08:05:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am curious nocturnal sandpiper",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-14T14:11:41Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-curious_nocturnal_sandpiper
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am curious nocturnal sandpiper
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-curious_nocturnal_sandpiper
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Nastoi/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-curious_nocturnal_sandpiper", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
BongRea/Qwen3_Rude_RAG_FULL2 | BongRea | 2025-05-29T08:00:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-29T07:56:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Awaitinf/Qwen3-4B-ancientstyle-roleplay-lora | Awaitinf | 2025-05-29T07:59:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation",
"zh",
"dataset:REILX/Modern-Chinese-to-Classical-Chinese",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"license:mit",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-29T05:55:25Z | ---
library_name: transformers
license: mit
datasets:
- REILX/Modern-Chinese-to-Classical-Chinese
language:
- zh
base_model:
- Qwen/Qwen3-4B
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
georgeiac00/sft_llama3_instruct_full_prec_full_data_5_ep_v2 | georgeiac00 | 2025-05-29T07:58:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-29T07:57:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
anderslindstrom/dqn-SpaceInvadersNoFrameskip-v4 | anderslindstrom | 2025-05-29T07:57:46Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-29T07:57:12Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 644.00 +/- 149.21
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga anderslindstrom -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga anderslindstrom -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga anderslindstrom
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
thejaminator/generatednewlines-medium_high-4e-05-16000-mcq0-qwen3_32b | thejaminator | 2025-05-29T07:53:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-32B",
"base_model:finetune:unsloth/Qwen3-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-29T07:53:08Z | ---
base_model: unsloth/Qwen3-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-32B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
HuangXinBa/q-FrozenLake-v1-8x8-noSlippery | HuangXinBa | 2025-05-29T07:52:55Z | 0 | 0 | null | [
"FrozenLake-v1-8x8-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-29T07:16:25Z | ---
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8-no_slippery
type: FrozenLake-v1-8x8-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="HuangXinBa/q-FrozenLake-v1-8x8-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
MetaphoricalCode/EVA-Gutenberg3-Qwen2.5-32B-exl3-8bpw-hb8 | MetaphoricalCode | 2025-05-29T07:51:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"base_model:nbeerbower/EVA-Gutenberg3-Qwen2.5-32B",
"base_model:quantized:nbeerbower/EVA-Gutenberg3-Qwen2.5-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl3",
"region:us"
]
| text-generation | 2025-05-29T07:23:45Z | ---
license: apache-2.0
library_name: transformers
base_model:
- nbeerbower/EVA-Gutenberg3-Qwen2.5-32B
base_model_relation: quantized
datasets:
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
---
## Quantized using the default exllamav3 (0.0.2) quantization process.
- Original model: https://huggingface.co/nbeerbower/EVA-Gutenberg3-Qwen2.5-32B
- exllamav3: https://github.com/turboderp-org/exllamav3
---

# EVA-Gutenberg3-Qwen2.5-32B
[EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2) finetuned on [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1), [nbeerbower/gutenberg2-dpo](https://huggingface.co/datasets/nbeerbower/gutenberg2-dpo), and [nbeerbower/gutenberg-moderne-dpo](https://huggingface.co/datasets/nbeerbower/gutenberg-moderne-dpo).
### Method
[ORPO tuned](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) with 8x A100 for 2 epochs. |
Hahahabananan/npc-model | Hahahabananan | 2025-05-29T07:49:06Z | 4 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-29T07:23:51Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: NPC
---
# Npc Model
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `NPC` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "NPC",
"lora_weights": "https://huggingface.co/Hahahabananan/npc-model/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Hahahabananan/npc-model', weight_name='lora.safetensors')
image = pipeline('NPC').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Hahahabananan/npc-model/discussions) to add images that show off what you’ve made with this LoRA.
|
charun45/tinyllama-msme | charun45 | 2025-05-29T07:48:35Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-29T07:48:35Z | ---
license: apache-2.0
---
|
phospho-app/Ruth011-gr00t-example_dataset_2-6hhw3p35e8 | phospho-app | 2025-05-29T07:48:27Z | 0 | 0 | null | [
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"region:us"
]
| null | 2025-05-29T07:42:47Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successfull, try it out on your robot!
## Training parameters:
- **Dataset**: [Ruth011/example_dataset_2](https://huggingface.co/datasets/Ruth011/example_dataset_2)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 64
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
Pteradonto/DialoGPT-witcher | Pteradonto | 2025-05-29T07:47:29Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-29T07:46:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ash2749/llama3.1_8b_instruct_appraisal | Ash2749 | 2025-05-29T07:45:57Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-29T07:43:05Z | ---
base_model: unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Ash2749
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
HappyBie/blpt0508 | HappyBie | 2025-05-29T07:42:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T16:03:08Z | ---
library_name: transformers
license: other
base_model: Qwen_r1_7B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: '01'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 01
This model is a fine-tuned version of [/AI4M/users/wzy/model/Qwen_r1_7B](https://huggingface.co//AI4M/users/wzy/model/Qwen_r1_7B) on the statelist_wkbk_0508 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.45.0
- Pytorch 2.5.1+cu124
- Datasets 2.21.0
- Tokenizers 0.20.3
|
hdong0/Qwen2.5-Math-1.5B-Open-R1-GRPO_deepscaler_1000steps_lr1e-6_kl1e-3_acc | hdong0 | 2025-05-29T07:42:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:agentica-org/DeepScaleR-Preview-Dataset",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-1.5B",
"base_model:finetune:Qwen/Qwen2.5-Math-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T23:29:07Z | ---
base_model: Qwen/Qwen2.5-Math-1.5B
datasets: agentica-org/DeepScaleR-Preview-Dataset
library_name: transformers
model_name: Qwen2.5-Math-1.5B-Open-R1-GRPO_deepscaler_1000steps_lr1e-6_kl1e-3_acc
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-Math-1.5B-Open-R1-GRPO_deepscaler_1000steps_lr1e-6_kl1e-3_acc
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) on the [agentica-org/DeepScaleR-Preview-Dataset](https://huggingface.co/datasets/agentica-org/DeepScaleR-Preview-Dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hdong0/Qwen2.5-Math-1.5B-Open-R1-GRPO_deepscaler_1000steps_lr1e-6_kl1e-3_acc", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
crystalai/thoth-emerald-cybersecurity-shield-ai-auto-train-autotrain-advanced | crystalai | 2025-05-29T07:41:49Z | 0 | 1 | adapter-transformers | [
"adapter-transformers",
"chemistry",
"biology",
"legal",
"music",
"art",
"code",
"climate",
"text-generation-inference",
"merge",
"dataset:DMindAI/DMind_Benchmark",
"dataset:nvidia/OpenMathReasoning",
"dataset:nvidia/OpenCodeReasoning",
"base_model:ACE-Step/ACE-Step-v1-3.5B",
"base_model:adapter:ACE-Step/ACE-Step-v1-3.5B",
"license:c-uda",
"region:us"
]
| null | 2025-05-29T07:39:14Z | ---
license: c-uda
datasets:
- DMindAI/DMind_Benchmark
- nvidia/OpenMathReasoning
- nvidia/OpenCodeReasoning
metrics:
- character
- accuracy
base_model:
- ACE-Step/ACE-Step-v1-3.5B
- google/gemma-3n-E4B-it-litert-preview
new_version: nvidia/parakeet-tdt-0.6b-v2
library_name: adapter-transformers
tags:
- chemistry
- biology
- legal
- music
- art
- code
- climate
- text-generation-inference
- merge
--- |
Tanathep/Customize-llama3.2-8b | Tanathep | 2025-05-29T07:34:38Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-29T07:34:38Z | ---
license: apache-2.0
---
|
InstaDeepAI/nequip-organics | InstaDeepAI | 2025-05-29T07:34:00Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-01T09:46:50Z | # NequIP
## Reference
Simon Batzner, Albert Musaelian, Lixin Sun, Mario Geiger, Jonathan P. Mailoa,
Mordechai Kornbluth, Nicola Molinari, Tess E. Smidt, and Boris Kozinsky.
E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials.
Nature Communications, 13(1), May 2022. ISSN: 2041-1723. URL: https://dx.doi.org/10.1038/s41467-022-29939-5.
## How to Use
For complete usage instructions, please refer to our [documentation](https://instadeepai.github.io/mlip)
## Model architecture
| Parameter | Value | Description |
|---------------------------|-----------------------------------------------|---------------------------------------------|
| `num_layers` | `5` | Number of NequIP layers. |
| `node_irreps` | `64x0e + 64x0o + 32x1e + 32x1o + 4x2e + 4x2o` | O3 representation space of node features. |
| `l_max` | `2` | Maximal degree of spherical harmonics. |
| `num_bessel` | `8` | Number of Bessel basis functions. |
| `radial_net_nonlinearity` | `swish` | Activation function for radial MLP. |
| `radial_net_n_hidden` | `64` | Number of hidden features in radial MLP. |
| `radial_net_n_layers` | `2` | Number of layers in radial MLP. |
| `radial_envelope` | `polynomial_envelope` | Radial envelope function. |
| `scalar_mlp_std` | `4` | Standard deviation of weight initialisation.|
| `atomic_energies` | `None` | Treatment of the atomic energies. |
| `avg_um_neighbors` | `None` | Mean number of neighbors. |
For more information about NequIP hyperparameters,
please refer to our [documentation](https://instadeepai.github.io/mlip/api_reference/models/nequip.html#mlip.models.nequip.config.NequipConfig)
## Training
Training is performed over 220 epochs, with an exponential moving average (EMA) decay rate of 0.99.
The model employs a Huber loss function with scheduled weights for the energy and force components.
Initially, the energy term is weighted at 40 and the force term at 1000.
At epoch 115, these weights are flipped.
We use our default MLIP optimizer in v1.0.0 with the following settings:
| Parameter | Value | Description |
|----------------------------------|----------------|-----------------------------------------------------------------|
| `init_learning_rate` | `0.002` | Initial learning rate. |
| `peak_learning_rate` | `0.002` | Peak learning rate. |
| `final_learning_rate` | `0.002` | Final learning rate. |
| `weight_decay` | `0` | Weight decay. |
| `warmup_steps` | `4000` | Number of optimizer warm-up steps. |
| `transition_steps` | `360000` | Number of optimizer transition steps. |
| `grad_norm` | `500` | Gradient norm used for gradient clipping. |
| `num_gradient_accumulation_steps`| `1` | Steps to accumulate before taking an optimizer step. |
For more information about the optimizer,
please refer to our [documentation](https://instadeepai.github.io/mlip/api_reference/training/optimizer.html#mlip.training.optimizer_config.OptimizerConfig)
## Dataset
| Parameter | Value | Description |
|-----------------------------|-------|--------------------------------------------|
| `graph_cutoff_angstrom` | `5` | Graph cutoff distance (in Å). |
| `max_n_node` | `32` | Maximum number of nodes allowed in a batch.|
| `max_n_edge` | `288` | Maximum number of edges allowed in a batch.|
| `batch_size` | `16` | Number of graphs in a batch. |
This model was trained on the [SPICE2_curated dataset](https://huggingface.co/datasets/InstaDeepAI/SPICE2-curated).
For more information about dataset configuration
please refer to our [documentation](https://instadeepai.github.io/mlip/api_reference/data/dataset_configs.html#mlip.data.configs.GraphDatasetBuilderConfig)
## License summary
1. The Licensed Models are **only** available under this License for Non-Commercial Purposes.
2. You are permitted to reproduce, publish, share and adapt the Output generated by the Licensed Model only for Non-Commercial Purposes and in accordance with this License.
3. You may **not** use the Licensed Models or any of its Outputs in connection with:
1. any Commercial Purposes, unless agreed by Us under a separate licence;
2. to train, improve or otherwise influence the functionality or performance of any other third-party derivative model that is commercial or intended for a Commercial Purpose and is similar to the Licensed Models;
3. to create models distilled or derived from the Outputs of the Licensed Models, unless such models are for Non-Commercial Purposes and open-sourced under the same license as the Licensed Models; or
4. in violation of any applicable laws and regulations. |
Master019/Nvidia-rtx-090 | Master019 | 2025-05-29T07:33:55Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-29T07:33:55Z | ---
license: apache-2.0
---
|
thejaminator/generatednewlines-medium_high-4e-05-8000-mcq0-qwen3_32b | thejaminator | 2025-05-29T07:32:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-32B",
"base_model:finetune:unsloth/Qwen3-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-29T07:32:29Z | ---
base_model: unsloth/Qwen3-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-32B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
2yunadaaa/qwen3-4b-3kingdoms | 2yunadaaa | 2025-05-29T07:30:17Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-29T07:26:08Z | ---
base_model: unsloth/qwen3-4b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** 2yunadaaa
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-4b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
duandeyun/my-awesome-model | duandeyun | 2025-05-29T07:29:20Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-29T06:27:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
coldchair16/CPRetriever-Code | coldchair16 | 2025-05-29T07:28:33Z | 19 | 0 | null | [
"pytorch",
"codexembed2b",
"custom_code",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2025-05-14T09:01:31Z | ---
license: cc-by-nc-4.0
---
# CPRetriever-Code
**CPRetriever-Code** is a code embedding model trained via contrastive learning for **code-related retrieval tasks** in competitive programming. It achieves strong performance on tasks such as:
* **Text-to-Code** retrieval (problem description → relevant code)
* **Code-to-Code** retrieval (find alternate solutions to the same problem)
This model is part of the [CPRet](https://github.com/coldchair/CPRet) suite for competitive programming retrieval research.
## 🔧 Usage
You can load this model using the `sentence-transformers` library:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("coldchair16/CPRetriever-Code")
embeddings = model.encode([
"def mex_query(arr):\n n = len(arr)\n seen = set()\n for i in range(n):\n seen.add(arr[i])\n i = 0\n while True:\n if i not in seen:\n return i\n i += 1"
])
```
## 💡 Applications
This model is optimized for **code-level semantic retrieval** in competitive programming settings:
* **Text-to-Code**: Retrieve relevant code snippets given a natural language problem description.
* **Code-to-Code**: Retrieve alternative implementations of the same problem.
It is particularly effective for analyzing programming contest submissions, searching solution variants, and building educational tools for code understanding.
## 📚 Training and Evaluation
CPRetriever-Code is trained via **contrastive learning** using positive and hard negative code pairs derived from [CPRet-data](https://huggingface.co/datasets/coldchair16/CPRet-data).
For the training pipeline, see the full project:
👉 [CPRet on GitHub](https://github.com/coldchair/CPRet?tab=readme-ov-file)
## 📦 Model Card
* Architecture: `Salesforce/SFR-Embedding-Code-2B_R` (encoder backbone)
* Training: Contrastive objective on code/code and text/code pairs
* Format: Compatible with `sentence-transformers`
|
Fiononana/parler-tts-mini-v1-Baiboly-colab-v10 | Fiononana | 2025-05-29T07:28:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-05-29T07:25:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmb90owed03ds1b1ykb0fywr1_cmb90z0wz03jr1b1yj0xsopvh | BootesVoid | 2025-05-29T07:27:19Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-29T07:27:18Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: NECKLACE
---
# Cmb90Owed03Ds1B1Ykb0Fywr1_Cmb90Z0Wz03Jr1B1Yj0Xsopvh
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `NECKLACE` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "NECKLACE",
"lora_weights": "https://huggingface.co/BootesVoid/cmb90owed03ds1b1ykb0fywr1_cmb90z0wz03jr1b1yj0xsopvh/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb90owed03ds1b1ykb0fywr1_cmb90z0wz03jr1b1yj0xsopvh', weight_name='lora.safetensors')
image = pipeline('NECKLACE').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb90owed03ds1b1ykb0fywr1_cmb90z0wz03jr1b1yj0xsopvh/discussions) to add images that show off what you’ve made with this LoRA.
|
uncleMehrzad/speaker-segmentation-fine-tuned-common-voice-11-fa | uncleMehrzad | 2025-05-29T07:26:54Z | 28 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"pyannet",
"speaker-diarization",
"speaker-segmentation",
"generated_from_trainer",
"fa",
"dataset:uncleMehrzad/synthetic-speaker-diarization-dataset-fa",
"base_model:pyannote/speaker-diarization-3.1",
"base_model:finetune:pyannote/speaker-diarization-3.1",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-25T10:05:58Z | ---
library_name: transformers
language:
- fa
license: mit
base_model: pyannote/speaker-diarization-3.1
tags:
- speaker-diarization
- speaker-segmentation
- generated_from_trainer
datasets:
- uncleMehrzad/synthetic-speaker-diarization-dataset-fa
model-index:
- name: speaker-segmentation-fine-tuned-common-voice-11-fa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speaker-segmentation-fine-tuned-common-voice-11-fa
This model is a fine-tuned version of [pyannote/speaker-diarization-3.1](https://huggingface.co/pyannote/speaker-diarization-3.1) on the uncleMehrzad/synthetic-speaker-diarization-dataset-fa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5764
- Model Preparation Time: 0.0072
- Der: 0.1832
- False Alarm: 0.0299
- Missed Detection: 0.0302
- Confusion: 0.1232
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 25
### Training results
## the results with default model was 27%.
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Der | False Alarm | Missed Detection | Confusion |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:------:|:-----------:|:----------------:|:---------:|
| 0.6235 | 1.0 | 142 | 0.5833 | 0.0072 | 0.2027 | 0.0341 | 0.0318 | 0.1369 |
| 0.5425 | 2.0 | 284 | 0.5785 | 0.0072 | 0.1927 | 0.0299 | 0.0309 | 0.1319 |
| 0.5053 | 3.0 | 426 | 0.5544 | 0.0072 | 0.1966 | 0.0254 | 0.0386 | 0.1326 |
| 0.5178 | 4.0 | 568 | 0.5480 | 0.0072 | 0.1916 | 0.0278 | 0.0325 | 0.1313 |
| 0.4992 | 5.0 | 710 | 0.5320 | 0.0072 | 0.1826 | 0.0304 | 0.0289 | 0.1233 |
| 0.4599 | 6.0 | 852 | 0.5178 | 0.0072 | 0.1742 | 0.0296 | 0.0283 | 0.1163 |
| 0.4328 | 7.0 | 994 | 0.5211 | 0.0072 | 0.1796 | 0.0284 | 0.0306 | 0.1206 |
| 0.4071 | 8.0 | 1136 | 0.5275 | 0.0072 | 0.1838 | 0.0253 | 0.0316 | 0.1269 |
| 0.406 | 9.0 | 1278 | 0.5379 | 0.0072 | 0.1801 | 0.0263 | 0.0335 | 0.1203 |
| 0.4051 | 10.0 | 1420 | 0.5525 | 0.0072 | 0.1847 | 0.0335 | 0.0281 | 0.1230 |
| 0.4026 | 11.0 | 1562 | 0.5439 | 0.0072 | 0.1883 | 0.0278 | 0.0324 | 0.1281 |
| 0.3701 | 12.0 | 1704 | 0.5389 | 0.0072 | 0.1810 | 0.0297 | 0.0336 | 0.1176 |
| 0.3716 | 13.0 | 1846 | 0.5232 | 0.0072 | 0.1798 | 0.0315 | 0.0311 | 0.1172 |
| 0.3736 | 14.0 | 1988 | 0.5616 | 0.0072 | 0.1843 | 0.0289 | 0.0313 | 0.1241 |
| 0.3557 | 15.0 | 2130 | 0.5332 | 0.0072 | 0.1770 | 0.0301 | 0.0305 | 0.1164 |
| 0.323 | 16.0 | 2272 | 0.5520 | 0.0072 | 0.1796 | 0.0292 | 0.0312 | 0.1191 |
| 0.3261 | 17.0 | 2414 | 0.5491 | 0.0072 | 0.1750 | 0.0318 | 0.0303 | 0.1129 |
| 0.3335 | 18.0 | 2556 | 0.5357 | 0.0072 | 0.1736 | 0.0299 | 0.0302 | 0.1135 |
| 0.3291 | 19.0 | 2698 | 0.5604 | 0.0072 | 0.1773 | 0.0288 | 0.0308 | 0.1177 |
| 0.3504 | 20.0 | 2840 | 0.5640 | 0.0072 | 0.1804 | 0.0302 | 0.0302 | 0.1200 |
| 0.3384 | 21.0 | 2982 | 0.5602 | 0.0072 | 0.1822 | 0.0310 | 0.0299 | 0.1214 |
| 0.3139 | 22.0 | 3124 | 0.5632 | 0.0072 | 0.1808 | 0.0296 | 0.0302 | 0.1210 |
| 0.3051 | 23.0 | 3266 | 0.5634 | 0.0072 | 0.1825 | 0.0298 | 0.0302 | 0.1225 |
| 0.3097 | 24.0 | 3408 | 0.5764 | 0.0072 | 0.1833 | 0.0299 | 0.0302 | 0.1233 |
| 0.3177 | 25.0 | 3550 | 0.5764 | 0.0072 | 0.1832 | 0.0299 | 0.0302 | 0.1232 |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
nis12ram/Nemotron-4-Mini-Hindi-4B-intermediate-gliner-en-exp3 | nis12ram | 2025-05-29T07:26:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"nemotron",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:nis12ram/Nemotron-4-Mini-Hindi-4B-Instruct",
"base_model:finetune:nis12ram/Nemotron-4-Mini-Hindi-4B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-29T07:19:24Z | ---
base_model: nis12ram/Nemotron-4-Mini-Hindi-4B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- nemotron
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** nis12ram
- **License:** apache-2.0
- **Finetuned from model :** nis12ram/Nemotron-4-Mini-Hindi-4B-Instruct
This nemotron model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nililba/MNLP_M2_document_encoder | nililba | 2025-05-29T07:25:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2025-05-29T07:19:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MetaphoricalCode/EVA-Gutenberg3-Qwen2.5-32B-exl3-6bpw-hb6 | MetaphoricalCode | 2025-05-29T07:21:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:nbeerbower/gutenberg-moderne-dpo",
"base_model:nbeerbower/EVA-Gutenberg3-Qwen2.5-32B",
"base_model:quantized:nbeerbower/EVA-Gutenberg3-Qwen2.5-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"exl3",
"region:us"
]
| text-generation | 2025-05-29T07:01:40Z | ---
license: apache-2.0
library_name: transformers
base_model:
- nbeerbower/EVA-Gutenberg3-Qwen2.5-32B
base_model_relation: quantized
datasets:
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
---
## Quantized using the default exllamav3 (0.0.2) quantization process.
- Original model: https://huggingface.co/nbeerbower/EVA-Gutenberg3-Qwen2.5-32B
- exllamav3: https://github.com/turboderp-org/exllamav3
---

# EVA-Gutenberg3-Qwen2.5-32B
[EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2) finetuned on [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1), [nbeerbower/gutenberg2-dpo](https://huggingface.co/datasets/nbeerbower/gutenberg2-dpo), and [nbeerbower/gutenberg-moderne-dpo](https://huggingface.co/datasets/nbeerbower/gutenberg-moderne-dpo).
### Method
[ORPO tuned](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) with 8x A100 for 2 epochs. |
DevQuasar/sarvamai.sarvam-m-GGUF | DevQuasar | 2025-05-29T07:20:49Z | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:sarvamai/sarvam-m",
"base_model:quantized:sarvamai/sarvam-m",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2025-05-29T03:01:23Z | ---
base_model:
- sarvamai/sarvam-m
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [sarvamai/sarvam-m](https://huggingface.co/sarvamai/sarvam-m)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
Khal5454/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bristly_beaked_hedgehog | Khal5454 | 2025-05-29T07:18:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am bristly beaked hedgehog",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-15T00:41:50Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bristly_beaked_hedgehog
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am bristly beaked hedgehog
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bristly_beaked_hedgehog
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Khal5454/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bristly_beaked_hedgehog", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
while0628/1B_merged_model_test_dt | while0628 | 2025-05-29T07:18:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-29T07:15:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ItzAwnishraj1437/Awnish_001 | ItzAwnishraj1437 | 2025-05-29T07:17:07Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-29T07:14:41Z | ---
license: apache-2.0
---
|
LynnMyatBhone/MyanmarNER | LynnMyatBhone | 2025-05-29T07:16:24Z | 0 | 0 | null | [
"my",
"dataset:LULab/myPOS",
"base_model:LynnMyatBhone/MyanmarNER",
"base_model:finetune:LynnMyatBhone/MyanmarNER",
"license:cc-by-nc-sa-4.0",
"region:us"
]
| null | 2025-05-29T07:09:42Z | ---
license: cc-by-nc-sa-4.0
datasets:
- LULab/myPOS
language:
- my
metrics:
- accuracy
base_model:
- LynnMyatBhone/MyanmarNER
--- |
Gnider/xlm-roberta-comet-small-qa-5ep-10mlntokens | Gnider | 2025-05-29T07:15:41Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| question-answering | 2025-05-29T05:51:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yamatazen/HMS-Fusion-12B | yamatazen | 2025-05-29T07:15:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"ja",
"base_model:shisa-ai/shisa-v2-mistral-nemo-12b",
"base_model:merge:shisa-ai/shisa-v2-mistral-nemo-12b",
"base_model:yamatazen/Himeyuri-Magnum-12B",
"base_model:merge:yamatazen/Himeyuri-Magnum-12B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-29T06:24:47Z | ---
base_model:
- yamatazen/Himeyuri-Magnum-12B
- shisa-ai/shisa-v2-mistral-nemo-12b
library_name: transformers
tags:
- mergekit
- merge
language:
- en
- ja
---

# HMS-Fusion-12B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Arcee Fusion](https://arcee.ai) merge method using [shisa-ai/shisa-v2-mistral-nemo-12b](https://huggingface.co/shisa-ai/shisa-v2-mistral-nemo-12b) as a base.
### Models Merged
The following models were included in the merge:
* [yamatazen/Himeyuri-Magnum-12B](https://huggingface.co/yamatazen/Himeyuri-Magnum-12B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: arcee_fusion
dtype: bfloat16
out_dtype: bfloat16
base_model: shisa-ai/shisa-v2-mistral-nemo-12b
models:
- model: yamatazen/Himeyuri-Magnum-12B
``` |
mojtabataghiabadi/gapyar1-gguf | mojtabataghiabadi | 2025-05-29T07:14:32Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-29T07:13:30Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mojtabataghiabadi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AshwiniFromIITK/gemma-3-0_1b_NewDS1.0 | AshwiniFromIITK | 2025-05-29T07:13:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"trl",
"en",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-29T07:13:30Z | ---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AshwiniFromIITK
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mmmanuel/SFT_TULU3_disable | mmmanuel | 2025-05-29T07:13:43Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-29T07:10:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rtl-llm/qwen2.5coder-7b-origen-chisel-pymtl-vhdl-verilog-truncate-interleave | rtl-llm | 2025-05-29T07:13:08Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-29T07:09:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmb903dac032l1b1y7xtyt64c_cmb909bl5035r1b1yqkzpyo6c | BootesVoid | 2025-05-29T07:11:15Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-29T07:11:08Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: BRUNETTE
---
# Cmb903Dac032L1B1Y7Xtyt64C_Cmb909Bl5035R1B1Yqkzpyo6C
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `BRUNETTE` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "BRUNETTE",
"lora_weights": "https://huggingface.co/BootesVoid/cmb903dac032l1b1y7xtyt64c_cmb909bl5035r1b1yqkzpyo6c/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb903dac032l1b1y7xtyt64c_cmb909bl5035r1b1yqkzpyo6c', weight_name='lora.safetensors')
image = pipeline('BRUNETTE').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb903dac032l1b1y7xtyt64c_cmb909bl5035r1b1yqkzpyo6c/discussions) to add images that show off what you’ve made with this LoRA.
|
carsontj/my-custom-codeparrot-tokenizer | carsontj | 2025-05-29T07:10:53Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-29T07:10:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Necent/distilbert-base-uncased-detected-jailbreak | Necent | 2025-05-29T07:02:07Z | 2,076 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"jailbreak-detection",
"en",
"dataset:your-dataset-id",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-04-17T13:44:25Z | ---
language: en
license: apache-2.0
tags:
- text-classification
- jailbreak-detection
pipeline_tag: text-classification
library_name: transformers
datasets:
- your-dataset-id
metrics:
- accuracy
base_model: distilbert-base-uncased
---
# 🧠 DistilBERT for Jailbreak Detection
Модель на основе DistilBERT для обнаружения попыток обхода фильтров (jailbreak) в текстах.
## 📚 Детали модели
- **Архитектура**: DistilBERT
- **Задача**: Классификация текста (обнаружение jailbreak)
- **Входные данные**: Текстовые строки
- **Выходные данные**: Метка класса (например, `jailbreak` или `safe`) |
TharushiDinushika/all-MiniLM-L12-v2-legal | TharushiDinushika | 2025-05-29T06:59:14Z | 29 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:9565",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/all-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L12-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-05-20T11:13:10Z | ---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:9565
- loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/all-MiniLM-L12-v2
widget:
- source_sentence: what effect does the institution of an action against one of the
co debtors have on the other debtors in an obligation that is jointly and severally
liable?
sentences:
- the forfeiture of the bond implies solely and simply, unless on equitable grounds
some mitigation of the penalty is ordered, the payment of the penal sum and nothing
else. the court is not entitled to go beyond the penal sum and order the surety
to pay the actual amount of the costs incurred.
- where two debtors are jointly and severally liable, the institution of action
against one of them interrupts the course of prescription against the other.
- driving an omnibus not fitted with an accurate speedometer or with seats not fitted
with cushions does not amount to a use of the vehicle in contravention of regulations
15 and 45 of the regulations made under sections 19 and 239 of the motor traffic
act so as to render the driver guilty of an offence under section 226 read with
section 216 2 b .
- source_sentence: in a judgement debt case how the costs in the court should be taxed?
sentences:
- held, that the election judge was wrong in upholding the objection to the production
of the reports of the police constables. the record of a speech made in public
by a candidate, or his agent, is not an unpublished official record relating to
any affairs of state within the meaning of section 123 of the evidence ordinance.
- held, that the omissions complained of were of a trivial nature and aro30 from
inadvertence within the meaning of section 73 of the parliamentary elections order
in council.
- the costs in the court below must be taxed as though the proceeding had been not
an action, but a petition under section 349.
- source_sentence: how should the word proceeding in section 264 be interpreted?
sentences:
- the word proceeding appearing in section 264 should be construed in the light
of the word action and will encompass any proceeding instituted against the company
before an institution established by law for the administration of justice.
- the classification of goods so far as the customs declaration and or inquiry is
concerned is not by the 4th respondent slsi whose classification has no binding
effect on the sri lanka customs. the slsi act has no provision directing the customs
to adopt its standards for such purpose.
- a search warrant under section 17 of the betting on horse racing ordinance may
be issued by a magistrate upon evidence on which he has every reason to suspect
that an offence against the ordinance is being committed.
- source_sentence: when does the maxim res ipsa loquitur apply in a negligence case
involving a vehicle accident?
sentences:
- a right of appeal must be specifically provided for. in the absence of a specific
right of appeal and in the absence of any provision in the act incorporating the
provisions of the civil procedure code, there is no right to make an application
for leave to appeal.
- the maxim res ipsa loquitur applies and the proved facts constituted, in the absence
of an explanation, prima facie evidence of negligence.
- where there has been an interlocutory decree for partition. a court of first instance
has no power to set it aside and order a sale on the ground that a satisfactory
partition is impracticable.
- source_sentence: does a person having an interest in property within the meaning
of the civil procedure code, entitled to make the application?
sentences:
- it is the duty of the court to regard the fitness of the proctor to continue in
the profession from the same angle as it should regard his fitness if he was a
candidate for enrolment.
- held, that the applicant was a person having an interest in the property within
the meaning of section 282 of the civil procedure code, and was entitled to make
the application.
- section 96 specifically refers to rule 3 as being the only rule which has to be
complied with to render a building fit for human habitation.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: all-MiniLM-L12-v2 Legal
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: retrieval evaluation
type: retrieval_evaluation
metrics:
- type: cosine_accuracy@1
value: 0.6547507055503292
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8165569143932268
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8504233301975541
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8927563499529633
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6547507055503292
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2721856381310755
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1700846660395108
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08927563499529634
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6547507055503292
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8165569143932268
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8504233301975541
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8927563499529633
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.778250968340416
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7410484701876996
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7440200668058536
name: Cosine Map@100
---
# all-MiniLM-L12-v2 Legal
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) on the json dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) <!-- at revision c004d8e3e901237d8fa7e9fff12774962e391ce5 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("TharushiDinushika/all-MiniLM-L12-v2-legal")
# Run inference
sentences = [
'does a person having an interest in property within the meaning of the civil procedure code, entitled to make the application?',
'held, that the applicant was a person having an interest in the property within the meaning of section 282 of the civil procedure code, and was entitled to make the application.',
'section 96 specifically refers to rule 3 as being the only rule which has to be complied with to render a building fit for human habitation.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `retrieval_evaluation`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6548 |
| cosine_accuracy@3 | 0.8166 |
| cosine_accuracy@5 | 0.8504 |
| cosine_accuracy@10 | 0.8928 |
| cosine_precision@1 | 0.6548 |
| cosine_precision@3 | 0.2722 |
| cosine_precision@5 | 0.1701 |
| cosine_precision@10 | 0.0893 |
| cosine_recall@1 | 0.6548 |
| cosine_recall@3 | 0.8166 |
| cosine_recall@5 | 0.8504 |
| cosine_recall@10 | 0.8928 |
| **cosine_ndcg@10** | **0.7783** |
| cosine_mrr@10 | 0.741 |
| cosine_map@100 | 0.744 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 9,565 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 18.77 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 56.38 tokens</li><li>max: 263 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>how are ambiguous clauses interpreted in insurance contracts?</code> | <code>b in contracts of insurance where the language used by the insurer is ambiguous the courts will lean in favour of that interpretation which favours the assured””.</code> |
| <code>what procedure must be followed when inspecting a scene of offense?</code> | <code>where there is an inspection by court of the locus in quo, statements made at the spot by witnesses should be made on oath or affirmation and an opportunity should be given to the parties to cross examine the witnesses</code> |
| <code>what court has jurisdiction over disputes regarding non payment of paddy prices by a cooperative society to its members?</code> | <code>where the purchase of paddy from its members is a part of the business of a co operative society, section 45 1 of the co operative societies ordinance ousts the jurisdiction of the ordinary courts in regard to a dispute between a member and the society over non payment of the price of paddy purchased.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### json
* Dataset: json
* Size: 1,063 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 18.76 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 57.08 tokens</li><li>max: 456 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>what direction should have been given to the jury regarding the second accused testimony?</code> | <code>held, that the jury should have been directed that the testimony the 2nd accused, as stated above, by means of a statement made to be considered by the jury in determining the guilt or innocence of each of the accused.</code> |
| <code>when is the inference of a wagering transaction destroyed?</code> | <code>where the documents show an ordinary commercial transaction, and in conformity with them one of the parties incurs personal obligations on a genuine transaction with third parties so that he himself is not a winner or loser by the alteration of price, but can only benefit by his commission, the inference of betting is irresistibly destroyed.</code> |
| <code>when does mandamus can issue and when an appeal?</code> | <code>mandamus may issue where a magistrate has refused to exercise juris diction; but where he has exercised jurisdiction and decided that he ought not to grant a summons, the proper remedy is an appeal.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 15
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 15
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | retrieval_evaluation_cosine_ndcg@10 |
|:--------:|:-------:|:-------------:|:---------------:|:-----------------------------------:|
| -1 | -1 | - | - | 0.6722 |
| 0.5351 | 10 | 5.1306 | - | - |
| 1.0 | 19 | - | 0.1021 | 0.7306 |
| 1.0535 | 20 | 3.826 | - | - |
| 1.5886 | 30 | 3.0628 | - | - |
| 2.0 | 38 | - | 0.0783 | 0.7660 |
| 2.1070 | 40 | 2.373 | - | - |
| 2.6421 | 50 | 2.1848 | - | - |
| 3.0 | 57 | - | 0.0739 | 0.7666 |
| 3.1605 | 60 | 2.0337 | - | - |
| 3.6957 | 70 | 1.7101 | - | - |
| 4.0 | 76 | - | 0.0701 | 0.7657 |
| 4.2140 | 80 | 1.5003 | - | - |
| 4.7492 | 90 | 1.5223 | - | - |
| 5.0 | 95 | - | 0.0695 | 0.7696 |
| 5.2676 | 100 | 1.3953 | - | - |
| 5.8027 | 110 | 1.175 | - | - |
| 6.0 | 114 | - | 0.0683 | 0.7733 |
| 6.3211 | 120 | 1.0839 | - | - |
| 6.8562 | 130 | 1.0829 | - | - |
| 7.0 | 133 | - | 0.0679 | 0.7756 |
| 7.3746 | 140 | 0.9821 | - | - |
| 7.9097 | 150 | 0.9947 | - | - |
| 8.0 | 152 | - | 0.0674 | 0.7757 |
| 8.4281 | 160 | 0.8119 | - | - |
| 8.9632 | 170 | 0.7784 | - | - |
| 9.0 | 171 | - | 0.0674 | 0.7775 |
| 9.4816 | 180 | 0.7061 | - | - |
| 10.0 | 190 | 0.7888 | 0.0677 | 0.7772 |
| 10.5351 | 200 | 0.7347 | - | - |
| 11.0 | 209 | - | 0.0673 | 0.7767 |
| 11.0535 | 210 | 0.765 | - | - |
| 11.5886 | 220 | 0.6773 | - | - |
| **12.0** | **228** | **-** | **0.0676** | **0.7783** |
| 12.1070 | 230 | 0.6999 | - | - |
| 12.6421 | 240 | 0.6476 | - | - |
| 13.0 | 247 | - | 0.0673 | 0.7772 |
| 13.1605 | 250 | 0.7193 | - | - |
| 13.6957 | 260 | 0.7584 | - | - |
| 14.0 | 266 | - | 0.0672 | 0.7772 |
| 14.2140 | 270 | 0.6345 | - | - |
| 14.7492 | 280 | 0.7013 | - | - |
| 15.0 | 285 | - | 0.0676 | 0.7783 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.12
- Sentence Transformers: 4.1.0
- Transformers: 4.52.3
- PyTorch: 2.7.0+cu126
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
BootesVoid/cmb8vfnjy00hr1b1y5hd1evv2_cmb8zur1302xl1b1ysbnz0o1p | BootesVoid | 2025-05-29T06:57:17Z | 2 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-29T06:57:15Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: PINEAPPLE
---
# Cmb8Vfnjy00Hr1B1Y5Hd1Evv2_Cmb8Zur1302Xl1B1Ysbnz0O1P
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `PINEAPPLE` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "PINEAPPLE",
"lora_weights": "https://huggingface.co/BootesVoid/cmb8vfnjy00hr1b1y5hd1evv2_cmb8zur1302xl1b1ysbnz0o1p/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb8vfnjy00hr1b1y5hd1evv2_cmb8zur1302xl1b1ysbnz0o1p', weight_name='lora.safetensors')
image = pipeline('PINEAPPLE').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb8vfnjy00hr1b1y5hd1evv2_cmb8zur1302xl1b1ysbnz0o1p/discussions) to add images that show off what you’ve made with this LoRA.
|
vertings6/35725d25-ddf8-483d-a0d8-44b5dc16bf7d | vertings6 | 2025-05-29T06:56:49Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B",
"base_model:adapter:unsloth/Qwen2-7B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-29T03:56:12Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 35725d25-ddf8-483d-a0d8-44b5dc16bf7d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Qwen2-7B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 70dd5e9fd01a2b45_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 3
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: vertings6/35725d25-ddf8-483d-a0d8-44b5dc16bf7d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/70dd5e9fd01a2b45_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0c62de77-8807-4ee2-b30b-428de90cbcd7
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 0c62de77-8807-4ee2-b30b-428de90cbcd7
warmup_steps: 50
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# 35725d25-ddf8-483d-a0d8-44b5dc16bf7d
This model is a fine-tuned version of [unsloth/Qwen2-7B](https://huggingface.co/unsloth/Qwen2-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4317
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 18
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.599 | 0.0000 | 1 | 2.1146 |
| 0.9972 | 0.0054 | 250 | 1.5016 |
| 1.2115 | 0.0108 | 500 | 1.4317 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
xl-zhao/PromptCoT-Mamba-7B | xl-zhao | 2025-05-29T06:55:49Z | 0 | 1 | null | [
"safetensors",
"arxiv:2505.22425",
"arxiv:2503.02324",
"license:mit",
"region:us"
]
| null | 2025-05-29T06:43:21Z | ---
license: mit
---
# **Scaling Reasoning without Attention**
[](http://arxiv.org/abs/2505.22425)
[](https://github.com/inclusionAI/PromptCoT)
---
## 🚀 Overview
**PromptCoT-Mamba** establishes the first **attention-free foundation model** capable of surpassing strong Transformer baselines across a broad suite of competition-level math and code reasoning tasks. Built on the **Mamba-2** architecture and trained through a structured, two-stage curriculum using the [**PromptCoT**](http://arxiv.org/abs/2503.02324) pipeline, it delivers **high accuracy with constant-memory inference**, eliminating the need for KV caching.
---
## 📈 Key Results
### 🔹 General Performance
| Model | MATH-500 | AIME 24 | AIME 25 | OlympiadBench | HumanEval | HumanEval+ | Livecodebench |
| ---------------------- | -------- | -------- | -------- | ------------- | --------- | ---------- | ------------- |
| **PromptCoT-Mamba-7B** | 84.6 | **35.2** | **24.6** | 50.7 | 81.7 | 75.0 | **29.9** |
| Gemma3-27B | **89.0** | 32.6 | 24.0 | **54.2** | **86.0** | **78.0** | 26.9 |
| Gemma3-12B | 83.8 | 22.9 | 19.2 | 49.9 | 81.1 | 73.2 | 22.2 |
| Sky-T1-7B | 85.0 | 19.2 | 19.2 | 49.2 | 41.5 | 37.2 | 18.3 |
| S1.1-7B | 82.0 | 19.2 | 17.5 | 43.1 | 64.0 | 56.7 | 13.3 |
| Bespoke-Stratos-7B | 81.2 | 18.3 | 16.3 | 45.0 | 73.2 | 68.3 | 8.6 |
| Nemotron-H-8B | 77.6 | -- | -- | -- | 79.3 | 74.4 | -- |
| M1-3B | 81.7 | 23.0 | 22.0 | 43.6 | -- | -- | -- |
> 🔍 **PromptCoT-Mamba-7B** consistently outperforms all 7B-scale Transformer and hybrid Mamba-Transformer baselines across all tasks.
---
### 🔹 Math Specialization vs. Generalist
| Model | MATH-500 | AIME 24 | AIME 25 | OlympiadBench | HumanEval | HumanEval+ | Livecodebench |
| --------------------------- | -------- | -------- | -------- | ------------- | --------- | ---------- | ------------- |
| **PromptCoT-Mamba-Math-7B** | **88.0** | **42.9** | **30.8** | **52.1** | 71.3 | 66.5 | 20.3 |
| PromptCoT-Mamba-7B | 84.6 | 35.2 | 24.6 | 50.7 | **81.7** | **75.0** | **29.9** |
> 🎯 The math-specialized variant improves AIME 24 by **+7.7%** and AIME 25 by **+6.2%**, with a slight trade-off in code-related performance.
---
### ⚡ Inference Efficiency
Using `vLLM` under constrained memory, PromptCoT-Mamba-7B demonstrates substantial speedups over the S1.1-7B Transformer baseline:
* 💡 **3.66× faster** at long-sequence generation on **24GB GPU**
* 💡 **1.69× faster** under **72GB memory**
> ⚙️ Practical for cost-sensitive or long-context inference workloads at scale.
---
## 🧪 Quick Start
### 🔧 Install Requirements
```bash
pip install transformers vllm torch accelerate
```
### 🧠 Load and Run the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "xl-zhao/PromptCoT-Mamba-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to("cuda")
problem_statement = (
"A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total does it take?"
)
prompt = (
f"<|im_start|>user\n{problem_statement}\nPlease reason step by step, and put your final answer within \\boxed{{}}.<|im_end|>\n"
"<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
with torch.no_grad():
output = model.generate(**inputs, max_length=65536, temperature=0.8)
generated_solution = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_solution)
```
---
## ⚡ Fast Inference with vLLM
```python
from vllm import LLM, SamplingParams
model_name = "xl-zhao/PromptCoT-Mamba-7B"
llm = LLM(model=model_name, tensor_parallel_size=1)
problem_statement = (
"A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total does it take?"
)
prompt = (
f"<|im_start|>user\n{problem_statement}\nPlease reason step by step, and put your final answer within \\boxed{{}}.<|im_end|>\n"
"<|im_start|>assistant\n"
)
sampling_params = SamplingParams(temperature=0.8, max_tokens=65536)
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```
---
## 📜 Citation
```bibtex
@article{zhao2025scaling,
author = {Xueliang Zhao and Wei Wu and Lingpeng Kong},
title = {Scaling Reasoning without Attention},
journal = {arXiv preprint arXiv:2505.22425},
year = {2025},
url = {https://arxiv.org/abs/2505.22425}
}
``` |
HuangXinBa/q-FrozenLake-v1-4x4-noSlippery | HuangXinBa | 2025-05-29T06:54:58Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-29T06:54:04Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="HuangXinBa/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
prashanthbsp/grpo_saved_lora_merged | prashanthbsp | 2025-05-29T06:53:58Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"qwen3",
"text-generation",
"unsloth",
"trl",
"sft",
"grpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-29T06:51:42Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
- grpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
while0628/merged_model_WOQ_epoch201 | while0628 | 2025-05-29T06:50:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-29T06:47:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mnickkk/csm-aika-3 | mnickkk | 2025-05-29T06:49:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"csm",
"text-to-audio",
"text-generation-inference",
"unsloth",
"en",
"base_model:mnickkk/csm",
"base_model:finetune:mnickkk/csm",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| text-to-audio | 2025-05-29T06:47:46Z | ---
base_model: mnickkk/csm
tags:
- text-generation-inference
- transformers
- unsloth
- csm
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** mnickkk
- **License:** apache-2.0
- **Finetuned from model :** mnickkk/csm
This csm model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
prashanthbsp/grpo_saved_lora | prashanthbsp | 2025-05-29T06:46:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-4B-Base",
"base_model:finetune:unsloth/Qwen3-4B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-29T06:46:38Z | ---
base_model: unsloth/Qwen3-4B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** prashanthbsp
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Firoj112/v1-vits-model | Firoj112 | 2025-05-29T06:46:39Z | 17 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2025-05-29T06:46:06Z | # my_vits_model
## Model Description
A VITS-based TTS model for English speech synthesis
- **Language(s)**: English
- **Type**: Single-speaker Text-to-Speech
- **Model Type**: VITS
- **Framework**: Coqui TTS
- **Uploaded**: 2025-05-29
## Intended Use
- **Primary Use**: Generating single-speaker speech from text input for applications like virtual assistants, audiobooks, or accessibility tools.
- **Out of Scope**: Real-time applications if not optimized for low latency.
## Usage
To load and use the model:
```python
from safetensors.torch import load_file
from TTS.config import load_config
from TTS.tts.models import setup_model
# Load configuration
config = load_config("config.json")
model = setup_model(config)
# Load weights
state_dict = load_file("my_vits_model.safetensors")
model.load_state_dict(state_dict)
model.eval()
# Example inference
text = "Hello, this is a test."
wav = model.inference(text, speaker_id=0 if False else None)
```
## Training Data
- **Dataset**: Custom dataset
- **Preprocessing**: Text normalized, audio sampled at 22050 Hz
## Evaluation
- **Metrics**: [Add metrics, e.g., Mean Opinion Score (MOS), Word Error Rate (WER)]
- **Results**: [Add results, e.g., "Achieved MOS of 4.2 on test set"]
## Limitations
- Limited to English language(s).
- Performance may vary with noisy or complex input text.
-
## License
- Released under apache-2.0.
## Ethical Considerations
- Ensure responsible use to avoid generating misleading or harmful audio content.
- Verify input text to prevent biased or offensive outputs.
## Dependencies
- `TTS` (Coqui TTS)
- `safetensors`
- `torch`
|
gradientrouting-spar/gushing_tofu_mini_steering_weights_seed_1_seed_5 | gradientrouting-spar | 2025-05-29T06:46:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-29T06:46:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
thejaminator/medium_high-medicalnewlinesonly-4e-05-0-mcq0-qwen3_32b | thejaminator | 2025-05-29T06:44:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-32B",
"base_model:finetune:unsloth/Qwen3-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-29T06:44:11Z | ---
base_model: unsloth/Qwen3-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-32B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sseul2/bert-smishing-model2 | sseul2 | 2025-05-29T06:44:24Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-29T06:23:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
EhDa24/MNLP_M2_quantized_model2 | EhDa24 | 2025-05-29T06:40:26Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-29T06:40:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gsoloupis/Hammer2.1-0.5b-Q8_0-GGUF | gsoloupis | 2025-05-29T06:38:31Z | 1 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:Salesforce/xlam-function-calling-60k",
"dataset:MadeAgents/xlam-irrelevance-7.5k",
"base_model:MadeAgents/Hammer2.1-0.5b",
"base_model:quantized:MadeAgents/Hammer2.1-0.5b",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-05-29T06:38:19Z | ---
license: cc-by-nc-4.0
datasets:
- Salesforce/xlam-function-calling-60k
- MadeAgents/xlam-irrelevance-7.5k
base_model: MadeAgents/Hammer2.1-0.5b
tags:
- llama-cpp
- gguf-my-repo
---
# gsoloupis/Hammer2.1-0.5b-Q8_0-GGUF
This model was converted to GGUF format from [`MadeAgents/Hammer2.1-0.5b`](https://huggingface.co/MadeAgents/Hammer2.1-0.5b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MadeAgents/Hammer2.1-0.5b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo gsoloupis/Hammer2.1-0.5b-Q8_0-GGUF --hf-file hammer2.1-0.5b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo gsoloupis/Hammer2.1-0.5b-Q8_0-GGUF --hf-file hammer2.1-0.5b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo gsoloupis/Hammer2.1-0.5b-Q8_0-GGUF --hf-file hammer2.1-0.5b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo gsoloupis/Hammer2.1-0.5b-Q8_0-GGUF --hf-file hammer2.1-0.5b-q8_0.gguf -c 2048
```
|
samlucky/DeepSeek-R1-Distill-Llama-8B_merged_16bit | samlucky | 2025-05-29T06:35:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T03:02:28Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Paerap/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-dappled_chattering_sandpiper | Paerap | 2025-05-29T06:33:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am dappled chattering sandpiper",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-13T11:38:46Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-dappled_chattering_sandpiper
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am dappled chattering sandpiper
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-dappled_chattering_sandpiper
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Paerap/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-dappled_chattering_sandpiper", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
rahulsharma0999956/Ethicallyborn | rahulsharma0999956 | 2025-05-29T06:31:35Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-29T06:31:33Z | ---
license: apache-2.0
---
|
Junrulu/MemoChat-Vicuna-13B | Junrulu | 2025-05-29T06:21:56Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:Junrulu/MemoChat_Instructions",
"arxiv:2308.08239",
"base_model:lmsys/vicuna-13b-v1.3",
"base_model:finetune:lmsys/vicuna-13b-v1.3",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-08-23T04:57:49Z | ---
license: cc-by-nc-sa-4.0
model-index:
- name: Junrulu/MemoChat-Vicuna-13B
results: []
datasets:
- Junrulu/MemoChat_Instructions
language:
- en
base_model: lmsys/vicuna-13b-v1.3
---
# Model Card for MemoChat-Vicuna-13B
Our repository: https://github.com/LuJunru/MemoChat.
Our paper: https://arxiv.org/abs/2308.08239. |
mayuri-mishra-viral-video/original.mayuri.mishra.viral.video.highway.viral.mayuri.mishra.viral.full.videos | mayuri-mishra-viral-video | 2025-05-29T06:21:25Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-29T06:20:59Z | <p><a rel="nofollow" href="https://viralflix.xyz/leaked/?ree">🌐 CLICK HERE 🟢==►► WATCH NOW</a></p>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?ree">🔴 CLICK HERE 🌐==►► Download Now)</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?ree"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
|
IvettMeraAmado/rostro-detector-fastapi | IvettMeraAmado | 2025-05-29T06:20:28Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-29T06:03:53Z | ---
title: Emociones Parcial
emoji: 🐠
colorFrom: blue
colorTo: gray
sdk: docker
pinned: false
license: apache-2.0
---
Trigger restart
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
Asap7772/insight-qwen3-1.7b-grpo-0527-step10 | Asap7772 | 2025-05-29T06:19:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-29T06:17:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmb8xnjnl01qr1b1yn4ma9vqf_cmb8ykp8t027g1b1yjodnjrxu | BootesVoid | 2025-05-29T06:19:06Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-29T06:19:05Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: CUTEASIANGIRL01
---
# Cmb8Xnjnl01Qr1B1Yn4Ma9Vqf_Cmb8Ykp8T027G1B1Yjodnjrxu
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `CUTEASIANGIRL01` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "CUTEASIANGIRL01",
"lora_weights": "https://huggingface.co/BootesVoid/cmb8xnjnl01qr1b1yn4ma9vqf_cmb8ykp8t027g1b1yjodnjrxu/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb8xnjnl01qr1b1yn4ma9vqf_cmb8ykp8t027g1b1yjodnjrxu', weight_name='lora.safetensors')
image = pipeline('CUTEASIANGIRL01').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb8xnjnl01qr1b1yn4ma9vqf_cmb8ykp8t027g1b1yjodnjrxu/discussions) to add images that show off what you’ve made with this LoRA.
|
microsoft/renderformer-v1.1-swin-large | microsoft | 2025-05-29T06:18:01Z | 0 | 3 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"renderformer",
"arxiv:2505.21925",
"license:mit",
"region:us"
]
| null | 2025-05-15T07:40:29Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- renderformer
license: mit
---
# RenderFormer: Transformer-based Neural Rendering of Triangle Meshes with Global Illumination</h1>
This repo contains the weights of **RenderFormer-V1.1-Large**.
## Quick Start
Please refer to our [Github Repo](https://github.com/microsoft/renderformer) and [Paper](https://arxiv.org/abs/2505.21925).
## Citation
If you find our repository useful, please cite our paper in your work:
```bibtex
@inproceedings {zeng2025renderformer,
title = {RenderFormer: Transformer-based Neural Rendering of Triangle Meshes with Global Illumination},
author = {Chong Zeng and Yue Dong and Pieter Peers and Hongzhi Wu and Xin Tong},
booktitle = {ACM SIGGRAPH 2025 Conference Papers},
year = {2025}
}
``` |
microsoft/renderformer-v1-base | microsoft | 2025-05-29T06:17:32Z | 0 | 4 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"renderformer",
"arxiv:2505.21925",
"license:mit",
"region:us"
]
| null | 2025-05-15T07:35:48Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- renderformer
license: mit
---
# RenderFormer: Transformer-based Neural Rendering of Triangle Meshes with Global Illumination</h1>
This repo contains the weights of **RenderFormer-V1-Base**.
## Quick Start
Please refer to our [Github Repo](https://github.com/microsoft/renderformer) and [Paper](https://arxiv.org/abs/2505.21925).
## Citation
If you find our repository useful, please cite our paper in your work:
```bibtex
@inproceedings {zeng2025renderformer,
title = {RenderFormer: Transformer-based Neural Rendering of Triangle Meshes with Global Illumination},
author = {Chong Zeng and Yue Dong and Pieter Peers and Hongzhi Wu and Xin Tong},
booktitle = {ACM SIGGRAPH 2025 Conference Papers},
year = {2025}
}
``` |
while0628/merged_model_WOQ_epoch161 | while0628 | 2025-05-29T06:15:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-29T06:12:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
leonidas123/valkyrie-lite-0 | leonidas123 | 2025-05-29T06:14:05Z | 193 | 0 | null | [
"safetensors",
"qwen3",
"qwen",
"transplanted-modules",
"gnn",
"ssm",
"en",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-19T04:25:01Z | ---
language:
- en
license: apache-2.0
tags:
- qwen
- transplanted-modules
- gnn
- ssm
- safetensors
---
# Valkyrie-Lite: Qwen3-8B with Transplanted Modules
This model is based on Qwen3-8B with specialized modules transplanted into layers 8 and 9:
* **GNN (Graph Neural Network)**: Enhances the model's ability to reason over graph-structured data
* **SSM (State Space Model)**: Improves long-range sequence modeling capabilities
## Model Details
* **Base Model**: Qwen/Qwen3-8B
* **Transplanted Layers**: 8, 9
* **Format**: Safetensors
* **Total Parameters**: ~10.5B parameters
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("leonidas123/valkyrie-lite-0")
model = AutoModelForCausalLM.from_pretrained("leonidas123/valkyrie-lite-0")
# Generate text
inputs = tokenizer("The Valkyrie model can reason about graphs and long sequences because", return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Enhanced Capabilities
The transplanted modules provide enhanced capabilities for:
1. **Graph-based reasoning**: The GNN module allows the model to better understand relationships in structured data
2. **Long-range context modeling**: The SSM module improves the model's ability to maintain coherence over longer contexts
## License
This model is released under the Apache 2.0 license.
|
New-tutorial-mayuri-mishra-on-hd/wATCH.FULL.VIDEO.LINK.Mayuri.Mishra.Viral.Video.Leaks.Official | New-tutorial-mayuri-mishra-on-hd | 2025-05-29T06:12:13Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-29T06:11:20Z | <p><a rel="nofollow" href="https://viralflix.xyz/leaked/?ree">🌐 CLICK HERE 🟢==►► WATCH NOW</a></p>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?ree">🔴 CLICK HERE 🌐==►► Download Now)</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?ree"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
|
rtl-llm/qwen2.5coder-7b-origen-verilog-vhdl-vhdl-truncate | rtl-llm | 2025-05-29T06:11:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-29T06:07:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CHOOSEIT/MCQA_LoRA_AD_1E | CHOOSEIT | 2025-05-29T06:11:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-29T06:10:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
davgauch/MNLP_M3_mcqa_model_4000 | davgauch | 2025-05-29T06:08:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-28T13:01:36Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-0.6B-Base
tags:
- generated_from_trainer
model-index:
- name: MNLP_M3_mcqa_model_4000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MNLP_M3_mcqa_model_4000
This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9324
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4000
- total_train_batch_size: 4000
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 1.1976 |
| No log | 2.0 | 10 | 1.0874 |
| No log | 3.0 | 15 | 1.0362 |
| No log | 4.0 | 20 | 1.0038 |
| No log | 5.0 | 25 | 0.9787 |
| No log | 6.0 | 30 | 0.9597 |
| No log | 7.0 | 35 | 0.9439 |
| No log | 8.0 | 40 | 0.9324 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu126
- Datasets 3.2.0
- Tokenizers 0.21.0
|
sergioalves/3da7c92f-bcae-4ccc-bc65-f419c324ace7 | sergioalves | 2025-05-29T06:07:57Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B",
"base_model:adapter:unsloth/Qwen2-7B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-29T03:56:45Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3da7c92f-bcae-4ccc-bc65-f419c324ace7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Qwen2-7B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 70dd5e9fd01a2b45_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 0.85
group_by_length: false
hub_model_id: sergioalves/3da7c92f-bcae-4ccc-bc65-f419c324ace7
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/70dd5e9fd01a2b45_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0c62de77-8807-4ee2-b30b-428de90cbcd7
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 0c62de77-8807-4ee2-b30b-428de90cbcd7
warmup_steps: 50
weight_decay: 0.05
xformers_attention: true
```
</details><br>
# 3da7c92f-bcae-4ccc-bc65-f419c324ace7
This model is a fine-tuned version of [unsloth/Qwen2-7B](https://huggingface.co/unsloth/Qwen2-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.7248 | 0.0000 | 1 | 2.1146 |
| 1.889 | 0.0072 | 250 | 1.7332 |
| 1.51 | 0.0144 | 500 | 1.6206 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
OrangeMoon0711/mymath-01 | OrangeMoon0711 | 2025-05-29T06:07:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-28T14:20:09Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CS291A/qwen-0.5b__433-enriched_claude3-7__file_level__simple__finetuned | CS291A | 2025-05-29T06:00:17Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"code",
"code-generation",
"instruction-tuning",
"en",
"base_model:secmlr/SWE-BENCH-433-enriched-set-claude-3in1-localization-with-reasoning_qwen_code_0.5b_433_enriched",
"base_model:finetune:secmlr/SWE-BENCH-433-enriched-set-claude-3in1-localization-with-reasoning_qwen_code_0.5b_433_enriched",
"license:other",
"region:us"
]
| null | 2025-05-29T05:59:34Z | ---
language: en
license: other
base_model: secmlr/SWE-BENCH-433-enriched-set-claude-3in1-localization-with-reasoning_qwen_code_0.5b_433_enriched
tags:
- code
- code-generation
- instruction-tuning
---
# Model Description
This model is fine-tuned from secmlr/SWE-BENCH-433-enriched-set-claude-3in1-localization-with-reasoning_qwen_code_0.5b_433_enriched for code generation tasks.
|
CS291A/qwen-0.5b__433-enriched_claude3-7__file_level__simple | CS291A | 2025-05-29T06:00:01Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"code",
"code-generation",
"instruction-tuning",
"en",
"base_model:Qwen/Qwen2.5-Coder-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-0.5B-Instruct",
"license:other",
"region:us"
]
| null | 2025-05-29T05:59:17Z | ---
language: en
license: other
base_model: Qwen/Qwen2.5-Coder-0.5B-Instruct
tags:
- code
- code-generation
- instruction-tuning
---
# Model Description
This model is fine-tuned from Qwen/Qwen2.5-Coder-0.5B-Instruct for code generation tasks.
|
yamero999/chess-board-segmentation-v6 | yamero999 | 2025-05-29T05:55:44Z | 0 | 0 | null | [
"pytorch",
"unet",
"chess",
"segmentation",
"computer-vision",
"image-segmentation",
"license:apache-2.0",
"region:us"
]
| image-segmentation | 2025-05-29T05:51:27Z | ---
license: apache-2.0
tags:
- chess
- segmentation
- unet
- computer-vision
pipeline_tag: image-segmentation
---
# Chess Board Segmentation V6
## Model Description
Breakthrough V6 UNet model for precise chess board segmentation and boundary detection.
## Performance
- **Dice Score**: 0.9391
- **Input Size**: 256x256
- **Architecture**: Custom UNet V6 with 32 base channels
## Usage
```python
import torch
from transformers import AutoModel
model = AutoModel.from_pretrained("yamero999/chess-board-segmentation-v6")
# Process 256x256 RGB images
```
## Training
Trained on diverse chess board images with various angles, lighting conditions, and board styles.
|
dimasik2987/1f1cbe5c-1175-4651-8bc5-f8c21174b097 | dimasik2987 | 2025-05-29T05:52:31Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B",
"base_model:adapter:unsloth/Qwen2-7B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-29T03:56:44Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1f1cbe5c-1175-4651-8bc5-f8c21174b097
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Qwen2-7B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 70dd5e9fd01a2b45_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 0.85
group_by_length: false
hub_model_id: dimasik2987/1f1cbe5c-1175-4651-8bc5-f8c21174b097
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 12
mixed_precision: bf16
mlflow_experiment_name: /tmp/70dd5e9fd01a2b45_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0c62de77-8807-4ee2-b30b-428de90cbcd7
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 0c62de77-8807-4ee2-b30b-428de90cbcd7
warmup_steps: 50
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# 1f1cbe5c-1175-4651-8bc5-f8c21174b097
This model is a fine-tuned version of [unsloth/Qwen2-7B](https://huggingface.co/unsloth/Qwen2-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0828
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.7248 | 0.0000 | 1 | 2.0607 |
| 1.3282 | 0.0072 | 250 | 1.1211 |
| 1.1449 | 0.0144 | 500 | 1.0828 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
angelmmu/llama-8b-it-bitek | angelmmu | 2025-05-29T05:51:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-29T05:48:29Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ericcastillo/adoptrix | ericcastillo | 2025-05-29T05:50:27Z | 0 | 0 | keras | [
"keras",
"joblib",
"es",
"license:mit",
"region:us"
]
| null | 2025-05-29T05:05:24Z | ---
license: mit
language:
- es
--- |
moitruongdothixanhh/thong-cong-nghet | moitruongdothixanhh | 2025-05-29T05:49:21Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-29T05:49:20Z | ---
license: apache-2.0
---
|
Akchunks/Reinforce-PixelCopter | Akchunks | 2025-05-29T05:43:22Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-28T16:42:32Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 25.80 +/- 16.86
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
SoumilB7/Ova-sense | SoumilB7 | 2025-05-29T05:41:54Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-03-22T07:05:24Z | - Model developed by: SoumilB7
- Product: Ova-sense
- Dataset: [HF dataset](https://huggingface.co/datasets/SoumilB7/Ova-sense)
- Architecture: CNN
- Github repo: [Ova-sense](https://github.com/SoumilB7/Ova-sense)
---
license: mit
---
|
keko24/Qwen3-0.6B-SFT-Tulu-MathCodeSciTableWild200k | keko24 | 2025-05-29T05:34:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-29T05:33:30Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Simje/layoutxlm-finetuned-xfund-fr | Simje | 2025-05-29T05:29:33Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"dataset:xfun",
"base_model:microsoft/layoutxlm-base",
"base_model:finetune:microsoft/layoutxlm-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2025-05-29T03:24:26Z | ---
library_name: transformers
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutxlm-base
tags:
- generated_from_trainer
datasets:
- xfun
model-index:
- name: layoutxlm-finetuned-xfund-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutxlm-finetuned-xfund-fr
This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on the xfun dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.52.3
- Pytorch 2.7.0+cu118
- Datasets 3.6.0
- Tokenizers 0.21.1
|
torchtorchkimtorch/Llama-3.2-Korean-GGACHI-1B-Instruct-v1 | torchtorchkimtorch | 2025-05-29T05:26:23Z | 914 | 7 | null | [
"safetensors",
"llama",
"ko",
"en",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"region:us"
]
| null | 2024-09-26T12:37:14Z | ---
language:
- ko
- en
base_model:
- meta-llama/Llama-3.2-1B
---
> @ 2024.10.07 Model [torchtorchkimtorch/Llama-3.2-Korean-GGACHI-1B-Instruct-v1](https://huggingface.co/torchtorchkimtorch/Llama-3.2-Korean-GGACHI-1B-Instruct-v1) Released!
> @ 2024.10.18 Performance for KOBEST of Llama-3.2-Korean-GGACHI-1B-Instruct-v1 has been updated!
# **Llama-3.2-Korean-GGACHI-1B-Instruct-v1** #

## 모델 설명 (Model Description)
GGACHI-1B-Instruct-v1는 Llama-3.2-1B-Instruct 모델을 기반으로 하는 한국어 태스크 수행에 최적화된 instruction-tuned 언어 모델입니다. 230,000개 이상의 고품질 한국어 데이터셋을 사용하여 fine-tuning되었습니다.
GGACHI-1B-Instruct-v1 is an instruction-tuned language model optimized for Korean language tasks, based on the Llama-3.2-1B-Instruct model. It has been fine-tuned using over 230,000 high-quality Korean language datasets.
## 모델 성능 (Model Performance)
#### - 0 shot ####
<table style="width:100%; text-align:center; border-collapse:collapse;">
<thead>
<tr>
<th style="border:1px solid black;">Task</th>
<th style="border:1px solid black;">Model</th>
<th style="border:1px solid black;">Accuracy</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="2" style="border:1px solid black;">kobest_boolq</td>
<td style="border:1px solid black;">Llama-3.2-1B-Instruct</td>
<td style="border:1px solid black;"><strong>0.502</td>
</tr>
<tr>
<td style="border:1px solid black;"><strong>GGACHI</strong></td>
<td style="border:1px solid black;"><strong>0.502</td>
</tr>
<tr>
<td rowspan="2" style="border:1px solid black;">kobest_copa</td>
<td style="border:1px solid black;">Llama-3.2-1B-Instruct</td>
<td style="border:1px solid black;">0.504</td>
</tr>
<tr>
<td style="border:1px solid black;"><strong>GGACHI</strong></td>
<td style="border:1px solid black;"><strong>0.521</strong></td>
</tr>
<tr>
<td rowspan="2" style="border:1px solid black;">kobest_hellaswag</td>
<td style="border:1px solid black;">Llama-3.2-1B-Instruct</td>
<td style="border:1px solid black;">0.358</td>
</tr>
<tr>
<td style="border:1px solid black;"><strong>GGACHI</strong></td>
<td style="border:1px solid black;"><strong>0.380</td>
</tr>
<tr>
<td rowspan="2" style="border:1px solid black;">kobest_sentineg</td>
<td style="border:1px solid black;">Llama-3.2-1B-Instruct</td>
<td style="border:1px solid black;">0.476</td>
</tr>
<tr>
<td style="border:1px solid black;"><strong>GGACHI</strong></td>
<td style="border:1px solid black;"><strong>0.594</strong></td>
</tr>
</tbody>
</table>
#### - 5 shot ####
<table style="width:100%; text-align:center; border-collapse:collapse;">
<thead>
<tr>
<th style="border:1px solid black;">Task</th>
<th style="border:1px solid black;">Model</th>
<th style="border:1px solid black;">Accuracy</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="2" style="border:1px solid black;">kobest_boolq</td>
<td style="border:1px solid black;">Llama-3.2-1B-Instruct</td>
<td style="border:1px solid black;"><strong>0.571</td>
</tr>
<tr>
<td style="border:1px solid black;"><strong>GGACHI</strong></td>
<td style="border:1px solid black;">0.565</td>
</tr>
<tr>
<td rowspan="2" style="border:1px solid black;">kobest_copa</td>
<td style="border:1px solid black;">Llama-3.2-1B-Instruct</td>
<td style="border:1px solid black;">0.526</td>
</tr>
<tr>
<td style="border:1px solid black;"><strong>GGACHI</strong></td>
<td style="border:1px solid black;"><strong>0.549</strong></td>
</tr>
<tr>
<td rowspan="2" style="border:1px solid black;">kobest_hellaswag</td>
<td style="border:1px solid black;">Llama-3.2-1B-Instruct</td>
<td style="border:1px solid black;">0.364</td>
</tr>
<tr>
<td style="border:1px solid black;"><strong>GGACHI</strong></td>
<td style="border:1px solid black;"><strong>0.398</td>
</tr>
<tr>
<td rowspan="2" style="border:1px solid black;">kobest_sentineg</td>
<td style="border:1px solid black;">Llama-3.2-1B-Instruct</td>
<td style="border:1px solid black;">0.725</td>
</tr>
<tr>
<td style="border:1px solid black;"><strong>GGACHI</strong></td>
<td style="border:1px solid black;"><strong>0.795</strong></td>
</tr>
</tbody>
</table>
#### - 10 shot ####
<table style="width:100%; text-align:center; border-collapse:collapse;">
<thead>
<tr>
<th style="border:1px solid black;">Task</th>
<th style="border:1px solid black;">Model</th>
<th style="border:1px solid black;">Accuracy</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="2" style="border:1px solid black;">kobest_boolq</td>
<td style="border:1px solid black;">Llama-3.2-1B-Instruct</td>
<td style="border:1px solid black;"><strong>0.593</td>
</tr>
<tr>
<td style="border:1px solid black;"><strong>GGACHI</strong></td>
<td style="border:1px solid black;">0.571</td>
</tr>
<tr>
<td rowspan="2" style="border:1px solid black;">kobest_copa</td>
<td style="border:1px solid black;">Llama-3.2-1B-Instruct</td>
<td style="border:1px solid black;">0.525</td>
</tr>
<tr>
<td style="border:1px solid black;"><strong>GGACHI</strong></td>
<td style="border:1px solid black;"><strong>0.549</strong></td>
</tr>
<tr>
<td rowspan="2" style="border:1px solid black;">kobest_hellaswag</td>
<td style="border:1px solid black;">Llama-3.2-1B-Instruct</td>
<td style="border:1px solid black;">0.356</td>
</tr>
<tr>
<td style="border:1px solid black;"><strong>GGACHI</strong></td>
<td style="border:1px solid black;"><strong>0.394</td>
</tr>
<tr>
<td rowspan="2" style="border:1px solid black;">kobest_sentineg</td>
<td style="border:1px solid black;">Llama-3.2-1B-Instruct</td>
<td style="border:1px solid black;">0.768</td>
</tr>
<tr>
<td style="border:1px solid black;"><strong>GGACHI</strong></td>
<td style="border:1px solid black;"><strong>0.821</strong></td>
</tr>
</tbody>
</table>
## Contact
- **김민혁(Minhyuk Kim)**
Mail: [email protected]
LinkedIn : https://www.linkedin.com/in/mhkim0929/ |
100seokyung/qwen-sft-2nd-0529 | 100seokyung | 2025-05-29T05:25:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-29T05:13:23Z | ---
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
model_name: qwen-sft-2nd-0529
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen-sft-2nd-0529
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="100seokyung/qwen-sft-2nd-0529", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/braveseokyung-korea-university/qwen2_5-vl-sft-2nd/runs/vvcs61hx)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
rtl-llm/qwen2.5coder-7b-translate-chisel-truncate | rtl-llm | 2025-05-29T05:25:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-29T05:21:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
friendshipkim/Qwen2.5-14B-Instruct-pruned-h1664-i6656-a0.0-d0.0 | friendshipkim | 2025-05-29T05:20:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-29T05:18:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Aidana2007/SportBot2 | Aidana2007 | 2025-05-29T05:18:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-29T00:55:54Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CompassioninMachineLearning/checkpoint_pretrainedllama8bInstruct11kresearchpapers | CompassioninMachineLearning | 2025-05-29T05:17:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-29T05:13:30Z | ---
base_model: unsloth/llama-3.1-8b-instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** CompassioninMachineLearning
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.1-8b-instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
vermoney/084c908e-dd55-4d11-9099-d9b609ba649d | vermoney | 2025-05-29T05:17:21Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"base_model:adapter:zake7749/gemma-2-2b-it-chinese-kyara-dpo",
"license:gemma",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-29T04:52:56Z | ---
library_name: peft
license: gemma
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 084c908e-dd55-4d11-9099-d9b609ba649d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: zake7749/gemma-2-2b-it-chinese-kyara-dpo
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 822f4ee77c2d1cec_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 3
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: vermoney/084c908e-dd55-4d11-9099-d9b609ba649d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 96
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 48
lora_target_linear: true
lr_scheduler: cosine
max_steps: 280
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/822f4ee77c2d1cec_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 29c810ad-22bc-456d-88cd-8c302ac361b4
wandb_project: s56-9
wandb_run: your_name
wandb_runid: 29c810ad-22bc-456d-88cd-8c302ac361b4
warmup_steps: 40
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# 084c908e-dd55-4d11-9099-d9b609ba649d
This model is a fine-tuned version of [zake7749/gemma-2-2b-it-chinese-kyara-dpo](https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 18
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 40
- training_steps: 280
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8611 | 0.0094 | 280 | 1.0198 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Subsets and Splits