Dataset schema: modelId (string, lengths 5-139), author (string, lengths 2-42), last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-05-29 12:28:49), downloads (int64, 0 to 223M), likes (int64, 0 to 11.7k), library_name (string, 457 classes), tags (sequence, lengths 1 to 4.05k), pipeline_tag (string, 54 classes), createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-05-29 12:28:19), card (string, lengths 11 to 1.01M).

| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
shallow6414/sn11-3-12-3 | shallow6414 | 2025-05-26T09:29:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-26T09:28:59Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built on the Gemma 3 architecture and fine-tuned for secure, efficient, enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/transformers@v4.49.0-Gemma-3
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch

pipe = pipeline(
    "text-generation",
    model="OpenGenerativeAI/Bifrost-27B",
    device="cuda",
    torch_dtype=torch.bfloat16,
)

messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)  # chat messages are passed positionally
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
JavaneseHonorifics/Unggah-Ungguh-Javanese-GPT2-Classifier | JavaneseHonorifics | 2025-05-26T09:29:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"jv",
"dataset:JavaneseHonorifics/Unggah-Ungguh",
"arxiv:2502.20864",
"base_model:w11wo/javanese-gpt2-small-imdb",
"base_model:finetune:w11wo/javanese-gpt2-small-imdb",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-26T09:20:29Z | ---
license: cc-by-nc-4.0
language:
- jv
datasets:
- JavaneseHonorifics/Unggah-Ungguh
base_model:
- w11wo/javanese-gpt2-small-imdb
pipeline_tag: text-classification
library_name: transformers
---
# Unggah-Ungguh-Javanese-GPT2-Classifier
Unggah-Ungguh-Javanese-GPT2-Classifier is part of the Unggah-Ungguh model family, a classifier for the Javanese honorific classification task introduced in "Do Language Models Understand Honorific Systems in Javanese?". Check out [our paper](https://arxiv.org/abs/2502.20864) for more information!
## Model description
- **Model type**: A classifier model trained on a highly curated Unggah-Ungguh dataset that represents Javanese honorific rules and systems.
- **Language(s) (NLP)**: Javanese
- **License:** CC-BY-NC 4.0
- **Finetuned from model:** w11wo/javanese-gpt2-small-imdb
## Model Sources
- **Project Page:** https://javanesehonorifics.github.io/
- **Repository:** https://github.com/JavaneseHonorifics
- **Paper:** https://arxiv.org/abs/2502.20864
## Using the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_path = "JavaneseHonorifics/Unggah-Ungguh-Javanese-GPT2-Classifier"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
INPUT_TEXT = "Mbak Srini mangan pecel ajange pincuk"
tokenized_input = tokenizer([INPUT_TEXT], return_tensors="pt", truncation=True, padding=True)
with torch.no_grad():
    outputs = model(**tokenized_input)
y_pred = outputs.logits.argmax(-1)
print("Predicted class:", y_pred.item())
```
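To map the raw class index to a label name, you can read the `id2label` mapping stored in the model config. This is a minimal follow-up to the snippet above; note that the actual label strings depend on how the checkpoint was exported:
```python
# Map the predicted class index to a label name via the model config.
# Falls back to the raw index if no mapping was exported with the checkpoint.
label = model.config.id2label.get(y_pred.item(), str(y_pred.item()))
print("Predicted label:", label)
```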
## License and Use
Unggah-Ungguh is licensed under CC-BY-NC 4.0.
## Citation
```bibtex
@article{farhansyah2025language,
title={Do Language Models Understand Honorific Systems in Javanese?},
author={Farhansyah, Mohammad Rifqi and Darmawan, Iwan and Kusumawardhana, Adryan and Winata, Genta Indra and Aji, Alham Fikri and Wijaya, Derry Tanti},
journal={arXiv preprint arXiv:2502.20864},
year={2025}
}
``` |
akashmadisetty/fine-tuned-translation-qwen | akashmadisetty | 2025-05-26T09:25:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-1.7B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-1.7B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-26T09:25:29Z | ---
base_model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** akashmadisetty
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-1.7B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
TanAlexanderlz/ALL_RGBCROP_ori16F-8B16F | TanAlexanderlz | 2025-05-26T09:23:04Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-05-26T08:33:29Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ALL_RGBCROP_ori16F-8B16F
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ALL_RGBCROP_ori16F-8B16F
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6047
- Accuracy: 0.8443
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 768
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4388 | 0.125 | 96 | 0.4338 | 0.7988 |
| 0.2352 | 1.125 | 192 | 0.6832 | 0.7622 |
| 0.1411 | 2.125 | 288 | 0.8688 | 0.8476 |
| 0.0005 | 3.125 | 384 | 0.9177 | 0.8354 |
| 0.0002 | 4.125 | 480 | 1.0111 | 0.8354 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
Darkknight535/Contrl-Stheno-v1-8B | Darkknight535 | 2025-05-26T09:22:58Z | 7 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Sao10K/L3-8B-Stheno-v3.2",
"Delta-Vector/Control-Nanuq-8B",
"conversational",
"en",
"base_model:Delta-Vector/Control-Nanuq-8B",
"base_model:merge:Delta-Vector/Control-Nanuq-8B",
"base_model:Sao10K/L3-8B-Stheno-v3.2",
"base_model:merge:Sao10K/L3-8B-Stheno-v3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-25T08:26:12Z | ---
base_model:
- Sao10K/L3-8B-Stheno-v3.2
- Delta-Vector/Control-Nanuq-8B
tags:
- merge
- mergekit
- lazymergekit
- Sao10K/L3-8B-Stheno-v3.2
- Delta-Vector/Control-Nanuq-8B
language:
- en
library_name: transformers
---
<style>
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #FF69B4 0%, #800080 100%);
color: #FFFFFF;
margin: 0;
padding: 0;
font-size: 16px;
min-height: 100vh;
}
.container {
margin: 20px;
background-color: rgba(28, 14, 36, 0.95);
padding: 20px;
border-radius: 12px;
box-shadow: 0 4px 20px rgba(255, 105, 180, 0.4);
border: 1px solid rgba(255, 105, 180, 0.4);
outline: 1px solid rgba(255, 105, 180, 0.7);
outline-offset: -1px;
position: relative;
backdrop-filter: blur(10px);
}
.container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(255, 105, 180, 0.98);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 2s ease-in-out infinite;
}
@keyframes borderGlow {
0% {
box-shadow: 0 0 5px rgba(255, 105, 180, 0.98);
}
50% {
box-shadow: 0 0 20px rgba(255, 105, 180, 0.98);
}
100% {
box-shadow: 0 0 5px rgba(255, 105, 180, 0.98);
}
}
.header h1 {
font-size: 28px;
color: #FF69B4;
margin: 0 0 20px 0;
text-shadow: 0 0 15px rgba(255, 105, 180, 0.8);
letter-spacing: 1px;
}
.update-section {
margin-top: 30px;
}
.update-section h2, h2 {
font-size: 24px;
color: #FF69B4;
text-shadow: 0 0 15px rgba(255, 105, 180, 0.8);
letter-spacing: 0.5px;
}
.update-section p {
font-size: 16px;
line-height: 1.6;
color: #FFE1FF;
}
.info p {
color: #FFE1FF;
line-height: 1.6;
font-size: 16px;
}
.info img {
width: 100%;
border-radius: 10px;
margin-bottom: 15px;
box-shadow: 0 0 30px rgba(255, 105, 180, 0.5);
border: 1px solid rgba(255, 105, 180, 0.4);
outline: 1px solid rgba(255, 105, 180, 0.7);
outline-offset: -1px;
transition: transform 0.3s ease, box-shadow 0.3s ease;
}
.info img:hover {
transform: scale(1.01);
box-shadow: 0 0 40px rgba(255, 105, 180, 0.6);
}
a {
color: #00FFEE;
text-decoration: none;
transition: color 0.3s ease;
}
a:hover {
color: #FF1493;
}
.button {
display: inline-block;
background: linear-gradient(45deg, rgba(255, 105, 180, 0.9), rgba(128, 0, 128, 0.9));
color: #FFFFFF;
padding: 12px 24px;
border-radius: 5px;
cursor: pointer;
text-decoration: none;
transition: all 0.3s ease;
border: 1px solid rgba(255, 105, 180, 0.4);
}
.button:hover {
background: linear-gradient(45deg, rgba(255, 105, 180, 1), rgba(128, 0, 128, 1));
box-shadow: 0 0 20px rgba(255, 105, 180, 0.7);
transform: translateY(-2px);
}
pre {
background-color: rgba(28, 14, 36, 0.95);
padding: 15px;
border-radius: 5px;
overflow-x: auto;
border: 1px solid rgba(255, 20, 147, 0.3);
outline: 1px solid rgba(255, 20, 147, 0.6);
outline-offset: -1px;
}
code {
font-family: 'Courier New', monospace;
color: #FFE1FF;
}
.benchmark-container {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 12px;
padding: 20px;
margin: 20px 0;
position: relative;
overflow: hidden;
}
.benchmark-container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(255, 20, 147, 0.98);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 2s ease-in-out infinite;
}
.benchmark-grid {
display: grid;
grid-template-columns: repeat(4, 1fr);
gap: 15px;
}
.metric-box {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 15px;
display: flex;
flex-direction: column;
align-items: center;
text-align: center;
transition: transform 0.3s ease, box-shadow 0.3s ease;
}
.metric-box:hover {
transform: translateY(-2px);
box-shadow: 0 4px 15px rgba(255, 20, 147, 0.3);
}
.metric-box .label {
color: #00FFEE;
font-size: 14px;
margin-bottom: 8px;
font-weight: 500;
}
.metric-box .value {
color: #FFE1FF;
font-size: 18px;
font-weight: 600;
text-shadow: 0 0 5px rgba(255, 20, 147, 0.5);
}
.metrics-section {
margin-bottom: 30px;
}
.metrics-section details {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 15px;
margin-bottom: 15px;
}
.metrics-section summary {
color: #FF1493;
font-size: 20px;
cursor: pointer;
text-shadow: 0 0 5px rgba(255, 20, 147, 0.3);
outline: none;
padding: 5px 0;
}
.metrics-section summary::-webkit-details-marker {
display: none;
}
.core-metrics-grid {
display: grid;
grid-template-columns: repeat(4, 1fr);
gap: 15px;
margin-bottom: 20px;
}
.progress-metrics {
display: grid;
gap: 15px;
}
.progress-metric {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 15px;
transition: transform 0.3s ease;
}
.progress-metric:hover {
transform: translateY(-2px);
}
.progress-label {
display: flex;
justify-content: space-between;
margin-bottom: 8px;
color: #00FFEE;
font-size: 14px;
}
.progress-value {
color: #FFE1FF;
}
.progress-bar {
width: 100%;
height: 8px;
background: rgba(0, 0, 0, 0.3);
border: 1px solid rgba(255, 20, 147, 0.15);
border-radius: 4px;
position: relative;
margin: 10px 0;
overflow: hidden;
}
.progress-fill {
height: 100%;
background: linear-gradient(90deg, #FF69B4 0%, #800080 100%);
border-radius: 4px;
transition: width 1s ease-in-out;
box-shadow: 0 0 15px rgba(255, 105, 180, 0.4);
}
.progress-bar.split {
display: flex;
justify-content: center;
background: rgba(0, 0, 0, 0.3);
border: 1px solid rgba(255, 20, 147, 0.15);
overflow: visible;
}
.progress-fill-left {
height: 100%;
position: absolute;
right: 50%;
background: linear-gradient(90deg, #FF69B4 30%, rgba(255, 105, 180, 0.5) 100%);
border-radius: 4px 0 0 4px;
transition: width 0.3s ease-in-out;
}
.progress-fill-right {
height: 100%;
position: absolute;
left: 50%;
background: linear-gradient(90deg, rgba(128, 0, 128, 0.5) 0%, #800080 70%);
border-radius: 0 4px 4px 0;
transition: width 0.3s ease-in-out;
}
.progress-metric.split .progress-bar::before,
.progress-metric.split .progress-bar::after {
content: '';
position: absolute;
width: 2px;
height: 20px;
background: rgba(255, 255, 255, 0.7);
top: 50%;
transform: translateY(-50%);
z-index: 2;
box-shadow: 0 0 8px rgba(255, 255, 255, 0.5);
}
.progress-metric.split .progress-bar::before {
left: 0;
}
.progress-metric.split .progress-bar::after {
right: 0;
}
.progress-metric.split:hover .progress-fill-left {
box-shadow: 0 0 15px rgba(255, 20, 147, 0.5);
}
.progress-metric.split:hover .progress-fill-right {
box-shadow: 0 0 15px rgba(75, 0, 130, 0.5);
}
.progress-metric.split {
padding: 12px 15px;
}
.progress-metric.split .progress-label {
margin-bottom: 8px;
gap: 12px;
}
.progress-metric.split .progress-label span:first-child,
.progress-metric.split .progress-label span:last-child {
flex: 0 0 80px;
font-size: 14px;
}
.progress-metric.split .progress-value {
font-weight: 600;
text-shadow: 0 0 5px rgba(255, 20, 147, 0.3);
font-size: 14px;
min-width: 60px;
text-align: center;
}
.progress-metric:hover .progress-fill-center {
box-shadow: 0 0 15px rgba(255, 20, 147, 0.5);
}
.progress-label {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 4px;
color: #00FFEE;
font-size: 14px;
}
.progress-metric:not(.split) .progress-label {
gap: 12px;
}
.progress-metric:not(.split) .progress-label span {
flex: 0 0 auto;
}
.progress-metric:not(.split) .progress-value {
color: #FFE1FF;
}
.progress-metric.split .progress-label {
gap: 20px;
}
.progress-metric.split .progress-label span:first-child,
.progress-metric.split .progress-label span:last-child {
flex: 0 0 80px;
}
.progress-metric.split .progress-label span:first-child {
text-align: right;
}
.progress-metric.split .progress-label span:last-child {
text-align: left;
}
.progress-metric.split .progress-value {
color: #FFE1FF;
flex: 0 0 60px;
text-align: center;
}
.progress-metric:hover .progress-fill {
box-shadow: 0 0 15px rgba(255, 20, 147, 0.5);
}
.progress-metric:hover .progress-fill-center {
box-shadow: 0 0 15px rgba(75, 0, 130, 0.5);
}
.info-grid {
display: grid;
grid-template-columns: repeat(3, 1fr);
gap: 15px;
}
.creator-section {
margin: 20px 0;
}
.creator-badge {
display: inline-flex;
align-items: center;
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 10px 15px;
}
.creator-label {
color: #FFE1FF;
font-size: 14px;
margin-right: 8px;
}
.creator-link {
display: flex;
align-items: center;
gap: 5px;
color: #00FFEE;
text-decoration: none;
transition: all 0.3s ease;
}
.creator-name {
font-weight: 600;
}
.creator-arrow {
font-size: 16px;
transition: transform 0.3s ease;
}
.creator-link:hover {
color: #FF1493;
}
.creator-link:hover .creator-arrow {
transform: translateX(3px);
}
.model-info {
margin-top: 30px;
}
.name-legend {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 20px;
margin: 20px 0;
}
.name-legend h3 {
color: #FF1493;
font-size: 18px;
margin: 0 0 15px 0;
}
.legend-grid {
display: grid;
gap: 12px;
}
.legend-item {
display: flex;
align-items: baseline;
gap: 10px;
}
.legend-key {
color: #00FFEE;
font-weight: 600;
min-width: 80px;
}
.legend-value {
color: #FFE1FF;
}
.model-description {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 20px;
}
.model-description p {
margin: 0 0 15px 0;
line-height: 1.6;
}
.model-description p:last-child {
margin-bottom: 0;
}
.section-container {
margin: 40px 0;
}
.info-card {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
overflow: hidden;
}
.info-header {
background: rgba(255, 20, 147, 0.1);
padding: 20px;
border-bottom: 1px solid rgba(255, 20, 147, 0.3);
}
.info-header h3 {
color: #FF1493;
margin: 0 0 10px 0;
font-size: 20px;
text-shadow: 0 0 5px rgba(255, 20, 147, 0.3);
}
.model-tags {
display: flex;
gap: 8px;
flex-wrap: wrap;
}
.model-tag {
background: rgba(0, 255, 238, 0.1);
color: #00FFEE;
padding: 4px 8px;
border-radius: 4px;
font-size: 12px;
border: 1px solid rgba(0, 255, 238, 0.2);
}
.model-composition {
padding: 20px;
border-bottom: 1px solid rgba(255, 20, 147, 0.3);
}
.model-composition h4 {
color: #FF1493;
margin: 0 0 15px 0;
font-size: 16px;
}
.composition-list {
list-style: none;
padding: 0;
margin: 0;
display: grid;
gap: 10px;
}
.composition-list li {
color: #FFE1FF;
display: flex;
align-items: baseline;
gap: 8px;
}
.model-component {
color: #00FFEE;
font-weight: 500;
min-width: 120px;
}
.template-card {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 15px;
}
.template-item {
display: flex;
align-items: center;
gap: 12px;
}
.template-icon {
width: 24px;
height: 24px;
opacity: 0.8;
}
.template-content {
display: flex;
align-items: baseline;
gap: 8px;
}
.template-link {
color: #00FFEE;
text-decoration: none;
font-weight: 500;
display: flex;
align-items: center;
gap: 5px;
}
.template-author {
color: rgba(255, 225, 255, 0.7);
font-size: 14px;
}
.quantized-container {
display: grid;
gap: 20px;
}
.quantized-section {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 20px;
}
.quantized-section h3 {
color: #FF1493;
font-size: 18px;
margin: 0 0 15px 0;
}
.quantized-items {
display: grid;
gap: 12px;
}
.quantized-item {
display: flex;
align-items: baseline;
gap: 10px;
}
.quantized-item .author {
color: rgba(255, 225, 255, 0.7);
min-width: 100px;
}
.multi-links {
display: flex;
align-items: center;
gap: 8px;
}
.separator {
color: rgba(255, 225, 255, 0.5);
}
.config-container {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
overflow: hidden;
}
.config-header {
background: rgba(255, 20, 147, 0.1);
padding: 15px 20px;
border-bottom: 1px solid rgba(255, 20, 147, 0.3);
}
.model-name {
color: #FF1493;
font-weight: 600;
}
.config-content {
padding: 20px;
}
.config-item {
display: flex;
flex-direction: column;
gap: 5px;
margin-bottom: 15px;
}
.config-label {
color: #00FFEE;
font-size: 14px;
font-weight: 500;
}
.config-value {
color: #FFE1FF;
font-family: 'Courier New', monospace;
}
.config-models {
margin-top: 20px;
}
.model-list {
list-style: none;
padding: 0;
margin: 10px 0 0 0;
}
.model-list li {
color: #FFE1FF;
font-family: 'Courier New', monospace;
padding: 5px 0;
padding-left: 20px;
position: relative;
}
.model-list li::before {
content: '-';
position: absolute;
left: 0;
color: #00FFEE;
}
.link-arrow {
display: inline-block;
transition: transform 0.3s ease;
}
a:hover .link-arrow {
transform: translateX(3px);
}
.benchmark-notification {
background: rgba(255, 20, 147, 0.15);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
margin-bottom: 20px;
padding: 12px;
animation: glowPulse 2s infinite;
}
.notification-content {
display: flex;
align-items: center;
justify-content: center;
gap: 10px;
text-align: center;
}
.notification-icon {
font-size: 20px;
}
.notification-text {
color: #FFE1FF;
font-size: 16px;
font-weight: 500;
display: flex;
flex-direction: column;
align-items: center;
gap: 5px;
}
.benchmark-link {
color: #00FFEE;
text-decoration: none;
font-size: 14px;
padding: 4px 8px;
border-radius: 4px;
transition: all 0.3s ease;
border: 1px solid rgba(0, 255, 238, 0.3);
}
.benchmark-link:hover {
background: rgba(0, 255, 238, 0.1);
border-color: rgba(0, 255, 238, 0.5);
color: #00FFEE;
text-shadow: 0 0 5px rgba(0, 255, 238, 0.5);
}
@keyframes glowPulse {
0% {
box-shadow: 0 0 5px rgba(255, 20, 147, 0.3);
}
50% {
box-shadow: 0 0 15px rgba(255, 20, 147, 0.5);
}
100% {
box-shadow: 0 0 5px rgba(255, 20, 147, 0.3);
}
}
.review-card {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 15px;
margin-bottom: 15px;
}
.review-card:last-child {
margin-bottom: 0;
}
</style>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Contrl-Stheno-8B-v1</title>
<link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet">
<link href="styles.css" rel="stylesheet">
</head>
<body>
<div class="container">
<div class="header">
<h1>Contrl-Stheno-8B-v1</h1>
</div>
<div class="info">
<img src="https://huggingface.co/Darkknight535/Contrl-Stheno-v1-8B/resolve/main/img_.jpg" alt="Model banner">
<div class="creator-section">
<div class="creator-badge">
<span class="creator-label">Created by</span>
<a href="https://huggingface.co/Darkknight535" target="_blank" class="creator-link">
<span class="creator-name">Darkknight535</span>
<span class="creator-arrow">→</span>
</a>
</div>
</div>
<div class="model-info">
<h2>Model Information</h2>
<div class="info-card">
<div class="info-header">
<h3>Contrl-Stheno-8B-v1</h3>
<div class="model-tags">
<span class="model-tag">Stheno = Stheno-v3.2</span>
<span class="model-tag">Contrl = Control-Nanuq</span>
<span class="model-tag">8b Parameters</span>
</div>
</div>
<div class="model-composition">
<h4>Model Composition</h4>
<ul class="composition-list">
<li><span class="model-component"><a href="https://huggingface.co/Delta-Vector/Control-Nanuq-8B" target="_blank">Control Nanuq 8B</a></span> Sweetness and Creativity capabilities</li>
<li><span class="model-component"><a href="https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2" target="_blank">Stheno-3.2 8B</a></span> Roleplay and logic</li>
</ul>
</div>
<div class="model-description">
<p>An experiment of mine that turned out to be great! It has dialogues I hadn't found even in 70B models.</p>
</div>
</div>
<!--<div class="metrics-section">
<details open>
<summary>User Reviews</summary>
<div class="progress-metrics">
<div>
<div class="review-card">
<div>
<span>[USERNAME]</span>
</div>
<p>[REVIEW]</p>
</div>
<div class="review-card">
<div>
<span>[USERNAME]</span>
</div>
<p>[REVIEW]</p>
</div>
<div class="review-card">
<div>
<span>[USERNAME]</span>
</div>
<p>[REVIEW]</p>
</div>
</div>
</div>
</details>
</div>-->
</div>
<div class="section-container">
<h2>Recommended Templates & Prompts</h2>
<div class="template-card">
<div class="template-item">
<div class="template-content">
<a href="" target="_blank" class="template-link">
Sao10k's Euryale System Prompt OR EVA System Prompt
<span class="link-arrow">→</span>
</a>
<span class="template-author">by Sao10k and EVA-UNIT-01</span>
</div>
</div>
</div>
</div>
<div class="section-container">
<h2>Quantized Versions</h2>
<div class="quantized-container">
<div class="quantized-section">
<h3>GGUF Quantizations</h3>
<div class="quantized-items">
<div class="quantized-item">
<span class="author">mradermacher</span>
<a href="https://huggingface.co/mradermacher/Contrl-Stheno-v1-8B-GGUF" target="_blank">
STATIC-GGUF
<span class="link-arrow">→</span>
</a>
</div>
</div>
</div>
<div class="quantized-section">
<h3>Imat GGUF Quantizations</h3>
<div class="quantized-items">
<div class="quantized-item">
<span class="author">mradermacher</span>
<a href="https://huggingface.co/mradermacher/Contrl-Stheno-v1-8B-i1-GGUF" target="_blank">
IMAT-GGUF
<span class="link-arrow">→</span>
</a>
</div>
</div>
</div>
</div>
</div>
<div class="support-section">
<h2>Thanks to these people (I just made a script and stole SteelSkull's Readme Template)</h2>
<div class="support-buttons">
<a href="https://huggingface.co/Sao10k" target="_blank" class="button">
Support Sao10K
</a>
<a href="https://huggingface.co/Delta-Vector" target="_blank" class="button">
Support Delta-Vector
</a>
<a href="https://huggingface.co/Steelskull" target="_blank" class="button">
Support SteelSkull
</a>
</div>
</div>
</div>
</div>
</body>
</html> |
mradermacher/pythia-6.9b-HC3-GGUF | mradermacher | 2025-05-26T09:22:21Z | 41 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"HC3",
"chatGPT",
"assistant",
"en",
"dataset:pszemraj/HC3-textgen-qa",
"base_model:pszemraj/pythia-6.9b-HC3",
"base_model:quantized:pszemraj/pythia-6.9b-HC3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-25T07:17:10Z | ---
base_model: pszemraj/pythia-6.9b-HC3
datasets:
- pszemraj/HC3-textgen-qa
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
- HC3
- chatGPT
- assistant
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/pszemraj/pythia-6.9b-HC3
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/pythia-6.9b-HC3-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-HC3-GGUF/resolve/main/pythia-6.9b-HC3.Q2_K.gguf) | Q2_K | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-HC3-GGUF/resolve/main/pythia-6.9b-HC3.Q3_K_S.gguf) | Q3_K_S | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-HC3-GGUF/resolve/main/pythia-6.9b-HC3.Q3_K_M.gguf) | Q3_K_M | 3.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-HC3-GGUF/resolve/main/pythia-6.9b-HC3.IQ4_XS.gguf) | IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-HC3-GGUF/resolve/main/pythia-6.9b-HC3.Q3_K_L.gguf) | Q3_K_L | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-HC3-GGUF/resolve/main/pythia-6.9b-HC3.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-HC3-GGUF/resolve/main/pythia-6.9b-HC3.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-HC3-GGUF/resolve/main/pythia-6.9b-HC3.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-HC3-GGUF/resolve/main/pythia-6.9b-HC3.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-HC3-GGUF/resolve/main/pythia-6.9b-HC3.Q6_K.gguf) | Q6_K | 5.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-HC3-GGUF/resolve/main/pythia-6.9b-HC3.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/pythia-6.9b-HC3-GGUF/resolve/main/pythia-6.9b-HC3.f16.gguf) | f16 | 13.8 | 16 bpw, overkill |
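As a concrete starting point, here is a minimal sketch of fetching one of the quants above with the `huggingface_hub` Python client (the chosen filename is the Q4_K_M entry from the table; pair the downloaded file with a GGUF runtime such as llama.cpp):

```python
from huggingface_hub import hf_hub_download

# Downloads the file into the local Hugging Face cache and returns its path.
path = hf_hub_download(
    repo_id="mradermacher/pythia-6.9b-HC3-GGUF",
    filename="pythia-6.9b-HC3.Q4_K_M.gguf",  # "fast, recommended" in the table above
)
print(path)
```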
ikawrakow has published a handy graph comparing some lower-quality quant types (lower is better).
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
StanfordAIMI/SRR-BERT2BERT | StanfordAIMI | 2025-05-26T09:21:53Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"encoder-decoder",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-03-27T00:11:50Z | ---
library_name: transformers
tags: []
---
## 🎬 Get Started
```python
import torch
from transformers import EncoderDecoderModel, AutoTokenizer, AutoConfig
# step 1: Setup constant
model_name = "StanfordAIMI/SRR-BERT2BERT"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# step 2: Load Processor and Model
model = EncoderDecoderModel.from_pretrained(model_name).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, padding_side="right", use_fast=False)
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.bos_token_id = tokenizer.cls_token_id
model.eval()
# step 3: Inference (example from MIMIC-CXR dataset)
input_text = "CHEST RADIOGRAPH PERFORMED ON ___ COMPARISON: Prior exam from ___. CLINICAL HISTORY: Weakness, assess pneumonia. FINDINGS: Frontal and lateral views of the chest were provided. Midline sternotomy wires are again noted. The heart is poorly assessed, though remains enlarged. There are at least small bilateral pleural effusions. There may be mild interstitial edema. No pneumothorax. Bony structures are demineralized with kyphotic angulation in the lower T-spine again noted. IMPRESSION: Limited exam with small bilateral effusions, cardiomegaly, and possible mild interstitial edema."
inputs = tokenizer(input_text, padding="max_length", truncation=True, max_length=512, return_tensors="pt")
inputs["attention_mask"] = inputs["input_ids"].ne(tokenizer.pad_token_id) # Add attention mask
input_ids = inputs["input_ids"].to(device)
attention_mask = inputs["attention_mask"].to(device)
generated_ids = model.generate(
    input_ids,
    attention_mask=attention_mask,
    max_new_tokens=286,
    min_new_tokens=120,
    decoder_start_token_id=model.config.decoder_start_token_id,
    num_beams=5,
    early_stopping=True,
    max_length=None,
)[0]
decoded = tokenizer.decode(generated_ids, skip_special_tokens=True)
print(decoded)
``` |
Borui-Chan/codeparrot-ds | Borui-Chan | 2025-05-26T09:21:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-26T08:01:50Z | ---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments (see the sketch after this list)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- mixed_precision_training: Native AMP
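Expressed as 🤗 `TrainingArguments`, these settings would look roughly as follows (an illustrative sketch, not the actual training script; `output_dir` is an assumption):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="codeparrot-ds",      # assumed name, matching the model id
    learning_rate=5e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=8,   # 32 * 8 = 256 total train batch size
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=1,
    fp16=True,                       # "Native AMP" mixed precision
)
```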
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.476 | 0.0931 | 500 | 2.3260 |
| 2.2028 | 0.1861 | 1000 | 2.1672 |
| 2.074 | 0.2792 | 1500 | 2.0317 |
| 1.9791 | 0.3722 | 2000 | 1.9181 |
| 1.8762 | 0.4653 | 2500 | 1.8222 |
| 1.7815 | 0.5583 | 3000 | 1.7436 |
| 1.7088 | 0.6514 | 3500 | 1.6765 |
| 1.6439 | 0.7444 | 4000 | 1.6315 |
| 1.6023 | 0.8375 | 4500 | 1.5997 |
| 1.5815 | 0.9305 | 5000 | 1.5870 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
banhkeomath2/sound | banhkeomath2 | 2025-05-26T09:20:07Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-07T05:40:54Z | ---
license: apache-2.0
---
|
dimasik87/492f5d86-02df-4ab1-809e-25ff65e925e5 | dimasik87 | 2025-05-26T09:19:31Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:quantized:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-26T08:33:01Z | ---
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
library_name: transformers
model_name: 492f5d86-02df-4ab1-809e-25ff65e925e5
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 492f5d86-02df-4ab1-809e-25ff65e925e5
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dimasik87/492f5d86-02df-4ab1-809e-25ff65e925e5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/b2djyfkn)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
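For intuition, the core DPO objective can be sketched in a few lines of PyTorch. This is a generic illustration of the loss, not the exact training code used for this checkpoint; the inputs are summed log-probabilities of the chosen and rejected completions under the policy and the frozen reference model:

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             reference_chosen_logps, reference_rejected_logps, beta=0.1):
    # Implicit rewards are scaled log-ratios between policy and reference.
    chosen_rewards = beta * (policy_chosen_logps - reference_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - reference_rejected_logps)
    # Logistic loss on the reward margin: prefer chosen over rejected.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```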
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
leobianco/npov_RM_model_google_seed_12345_SYN_LLM_false_SYN_STRUCT_true_epochs_3_lr_1e-4_lora_1 | leobianco | 2025-05-26T09:17:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-26T09:02:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
phospho-app/jmota27-gr00t-boat_cup_dataset-npbv8 | phospho-app | 2025-05-26T09:15:29Z | 0 | 0 | null | [
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-05-26T08:49:06Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [jmota27/boat_cup_dataset](https://huggingface.co/datasets/jmota27/boat_cup_dataset)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 49
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
qnguyen3/colqwen2_5-multilingual | qnguyen3 | 2025-05-26T09:14:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"multimodal_embedding",
"multilingual_embedding",
"Text-to-Visual Document (T→VD) retrieval",
"visual-document-retrieval",
"en",
"fr",
"es",
"it",
"de",
"dataset:openbmb/VisRAG-Ret-Train-Synthetic-data",
"dataset:openbmb/VisRAG-Ret-Train-In-domain-data",
"dataset:tsystems/vqa_de_en_batch1",
"dataset:vidore/colpali_train_set",
"dataset:llamaindex/vdr-multilingual-train",
"dataset:Metric-AI/tabfquad_train_set",
"arxiv:2004.12832",
"arxiv:2407.01449",
"arxiv:2106.09685",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"license:mit",
"endpoints_compatible",
"region:us"
] | visual-document-retrieval | 2025-05-26T09:10:18Z | ---
license: mit
datasets:
- openbmb/VisRAG-Ret-Train-Synthetic-data
- openbmb/VisRAG-Ret-Train-In-domain-data
- tsystems/vqa_de_en_batch1
- vidore/colpali_train_set
- llamaindex/vdr-multilingual-train
- Metric-AI/tabfquad_train_set
language:
- en
- fr
- es
- it
- de
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
tags:
- multimodal_embedding
- multilingual_embedding
- Text-to-Visual Document (T→VD) retrieval
library_name: transformers
pipeline_tag: visual-document-retrieval
---
# ColQwen2.5-3b-multilingual-v1.0: Multilingual Visual Retriever based on Qwen2.5-VL-3B-Instruct with ColBERT strategy
### This is the base version trained on 8xH100 80GB with per_device_batch_size=128 for 8 epochs.
ColQwen is a model based on a novel architecture and training strategy that uses Vision Language Models (VLMs) to efficiently index documents from their visual features.
It is a [Qwen2.5-VL-3B](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)-style multi-vector representations of text and images.
It was introduced in the paper [ColPali: Efficient Document Retrieval with Vision Language Models](https://arxiv.org/abs/2407.01449) and first released in [this repository](https://github.com/ManuelFay/colpali).
<p align="center"><img width=800 src="https://github.com/illuin-tech/colpali/blob/main/assets/colpali_architecture.webp?raw=true"/></p>
## Version specificity
This model takes images at dynamic resolutions as input and does not resize them, avoiding the aspect-ratio distortion that resizing introduces in ColPali.
Maximal resolution is set so that at most 768 image patches are created. Experiments show clear improvements with larger numbers of image patches, at the cost of higher memory requirements.
This version is trained with `colpali-engine==0.3.9`.
## Data
- **German & English**: Taken from the `tsystems/vqa_de_en_batch1` dataset.
- **Multilingual dataset**: Taken from `llamaindex/vdr-multilingual-train`.
- **Synthetic data**: Taken from `openbmb/VisRAG-Ret-Train-Synthetic-data` dataset.
- **In-domain VQA dataset**: Taken from `openbmb/VisRAG-Ret-Train-In-domain-data` dataset.
- **Colpali dataset**: Taken from `vidore/colpali_train_set`.
## Model Training
### Parameters
We train models use low-rank adapters ([LoRA](https://arxiv.org/abs/2106.09685))
with `alpha=128` and `r=128` on the transformer layers from the language model,
as well as the final randomly initialized projection layer, and use a `paged_adamw_8bit` optimizer.
We train on an 8xH100 GPU setup with distributed data parallelism (via accelerate), a learning rate of 2e-4 with linear decay and 1% warmup steps, and a per-device batch size of 128 in `bfloat16` format.
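A sketch of what this adapter configuration might look like with the `peft` library, using the stated `r=128` and `alpha=128` (the `target_modules` list and dropout are illustrative assumptions, not taken from the actual training code):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,
    lora_alpha=128,
    lora_dropout=0.05,  # assumed; not stated in this card
    # Assumed projection layers; the card only says "transformer layers
    # from the language model" plus the final projection layer.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```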
## Installation
```bash
pip install git+https://github.com/illuin-tech/colpali
pip install transformers==4.49.0
pip install flash-attn --no-build-isolation
```
## Usage
```python
import torch
from PIL import Image
from colpali_engine.models import ColQwen2_5, ColQwen2_5_Processor
model = ColQwen2_5.from_pretrained(
    "tsystems/colqwen2.5-3b-multilingual-v1.0",
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",  # or "mps" if on Apple Silicon
).eval()
processor = ColQwen2_5_Processor.from_pretrained("tsystems/colqwen2.5-3b-multilingual-v1.0")

# Your inputs
images = [
    Image.new("RGB", (32, 32), color="white"),
    Image.new("RGB", (16, 16), color="black"),
]
queries = [
    "Is attention really all you need?",
    "What is the amount of bananas farmed in Salvador?",
]

# Process the inputs
batch_images = processor.process_images(images).to(model.device)
batch_queries = processor.process_queries(queries).to(model.device)

# Forward pass
with torch.no_grad():
    image_embeddings = model(**batch_images)
    query_embeddings = model(**batch_queries)

scores = processor.score_multi_vector(query_embeddings, image_embeddings)
```
## Limitations
- **Focus**: The model primarily focuses on PDF-type documents and high-resource languages, potentially limiting its generalization to other document types and less represented languages.
- **Support**: The model relies on multi-vector retrieval derived from the ColBERT late-interaction mechanism (a minimal sketch follows below), which may require engineering effort to adapt to widely used vector-retrieval frameworks that lack native multi-vector support.
## License
ColQwen2.5's vision language backbone model (Qwen2.5-VL) is under `apache2.0` license. The adapters attached to the model are under MIT license.
## Citation
If you use this models from this organization in your research, please cite the original paper as follows:
```bibtex
@misc{faysse2024colpaliefficientdocumentretrieval,
title={ColPali: Efficient Document Retrieval with Vision Language Models},
author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and Céline Hudelot and Pierre Colombo},
year={2024},
eprint={2407.01449},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2407.01449},
}
```
- **Developed by:** [T-Systems International](https://www.t-systems.com/de/en) |
Hyaline/Domaino1s-finance | Hyaline | 2025-05-26T09:13:41Z | 1 | 0 | null | [
"safetensors",
"qwen2",
"arxiv:2501.14431",
"license:apache-2.0",
"region:us"
] | null | 2025-01-19T01:32:02Z | ---
license: apache-2.0
---
This repository stores the model parameters for our paper [Domaino1s: Guiding LLM Reasoning for Explainable Answers in High-Stakes Domains](https://arxiv.org/abs/2501.14431).
Our paper has been accepted to the Findings of **ACL 2025**.
More details: [Domaino1s](https://github.com/Hyalinesky/Domaino1s)
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{chu2025domaino1s,
title={Domaino1s: Guiding LLM Reasoning for Explainable Answers in High-Stakes Domains},
author={Chu, Xu and Tan, Zhijie and Xue, Hanlin and Wang, Guanyu and Mo, Tong and Li, Weiping},
journal={arXiv preprint arXiv:2501.14431},
year={2025}
}
``` |
ashani/ppo-SnowballTarget | ashani | 2025-05-26T09:09:56Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2025-05-26T09:09:52Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: ashani/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sp-embraceable/Phi4-FT-unsloth-runpod-3000steps-e1-above90-adapter | sp-embraceable | 2025-05-26T09:08:05Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/phi-4",
"base_model:adapter:unsloth/phi-4",
"region:us"
] | null | 2025-05-26T09:05:51Z | ---
base_model: unsloth/Phi-4
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
ggml-org/Qwen2.5-Omni-3B-GGUF | ggml-org | 2025-05-26T09:07:29Z | 0 | 0 | null | [
"gguf",
"multimodal",
"any-to-any",
"en",
"base_model:Qwen/Qwen2.5-Omni-3B",
"base_model:quantized:Qwen/Qwen2.5-Omni-3B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | any-to-any | 2025-05-26T08:52:28Z | ---
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-Omni-3B/blob/main/LICENSE
language:
- en
tags:
- multimodal
pipeline_tag: any-to-any
base_model:
- Qwen/Qwen2.5-Omni-3B
---
# Qwen2.5-Omni-3B-GGUF
Original model: https://huggingface.co/Qwen/Qwen2.5-Omni-3B
Modalities:
- ✅ Text input
- ✅ Audio input
- ✅ Image input
- ❌ Video input
- ❌ Audio generation
Ref PR: https://github.com/ggml-org/llama.cpp/pull/13784
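Until packaged examples land, a minimal text-only sketch, assuming `llama-cpp-python` built against a llama.cpp version recent enough to load this GGUF (the filename below is hypothetical; audio/image input requires the PR above and is not shown):

```python
from llama_cpp import Llama

# Hypothetical quant filename; pick the actual *.gguf from this repo's files.
llm = Llama(model_path="Qwen2.5-Omni-3B-Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF is in one line."}]
)
print(out["choices"][0]["message"]["content"])
```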
|
ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_E4_V2 | ahmedelgebaly | 2025-05-26T09:05:21Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-22T22:08:30Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: llama-3.1-8b-squadv2_SciQ_E4_V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3.1-8B
lora_model_dir: ahmedelgebaly/llama-3.1-8b-squadv2_E4_V2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: train
- path: ahmedelgebaly/SQuad_2_Alpaca
type: alpaca
split: train
percentage: 0.1 # small replay buffer to avoid forgetting
test_datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: validation
dataset_prepared_path:
output_dir: ./outputs/qlora-out
adapter: qlora
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 64 #Before it was 16
lora_dropout: 0.05
lora_target_modules: #Before it was empty
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- up_proj
- down_proj
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: llama-3.1-8b-squadv2_SciQ_e4_v2
wandb_entity:
wandb_watch:
wandb_name: llama-3.1-8b-squadv2-v0_SciQ_e4_v2
wandb_log_model:
hub_model_id: ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_E4_V2
gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 4
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: true #Before it was false
bf16: auto
tf32: false
gradient_checkpointing: true
flash_attention: true
warmup_steps: 50 #Before it was 10
evals_per_epoch: 4
saves_per_epoch: 1
weight_decay: 0.0
special_tokens:
pad_token: "<|end_of_text|>"
```
</details><br>
# llama-3.1-8b-squadv2_SciQ_E4_V2
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the [ahmedelgebaly/SciQ_Alpaca](https://huggingface.co/datasets/ahmedelgebaly/SciQ_Alpaca) dataset, with a small replay buffer drawn from [ahmedelgebaly/SQuad_2_Alpaca](https://huggingface.co/datasets/ahmedelgebaly/SQuad_2_Alpaca) (see the axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 1.1591
## Model description
More information needed
## Intended uses & limitations
More information needed
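As a starting point, a minimal sketch of loading this QLoRA adapter for inference, assuming `peft` plus 4-bit quantization via `bitsandbytes` (mirroring the training setup above):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B", quantization_config=bnb, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B")
model = PeftModel.from_pretrained(base, "ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_E4_V2")

prompt = "Question: What is the boiling point of water at sea level?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```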
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0029 | 1 | 3.2906 |
| 0.5404 | 0.2504 | 85 | 0.9333 |
| 0.5631 | 0.5007 | 170 | 0.9100 |
| 0.575 | 0.7511 | 255 | 0.8983 |
| 0.5583 | 1.0015 | 340 | 0.8907 |
| 0.3664 | 1.2496 | 425 | 0.9217 |
| 0.38 | 1.5 | 510 | 0.9176 |
| 0.388 | 1.7504 | 595 | 0.9175 |
| 0.3737 | 2.0007 | 680 | 0.9100 |
| 0.2372 | 2.2489 | 765 | 1.0172 |
| 0.2475 | 2.4993 | 850 | 0.9950 |
| 0.2375 | 2.7496 | 935 | 1.0111 |
| 0.2395 | 3.0 | 1020 | 1.0045 |
| 0.1628 | 3.2482 | 1105 | 1.1457 |
| 0.1648 | 3.4985 | 1190 | 1.1547 |
| 0.1625 | 3.7489 | 1275 | 1.1591 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
llmware/phi-4-ov | llmware | 2025-05-26T09:04:20Z | 2 | 0 | null | [
"openvino",
"phi3",
"green",
"llmware-chat",
"p14",
"ov",
"emerald",
"custom_code",
"base_model:microsoft/phi-4",
"base_model:quantized:microsoft/phi-4",
"license:apache-2.0",
"region:us"
] | null | 2025-01-14T21:40:57Z | ---
license: apache-2.0
inference: false
base_model: microsoft/phi-4
base_model_relation: quantized
tags: [green, llmware-chat, p14, ov, emerald]
---
# phi-4-ov
<!-- Provide a quick summary of what the model is/does. -->
**phi-4-ov** is an OpenVINO int4 quantized version of [Microsoft Phi-4](https://www.huggingface.co/microsoft/phi-4), providing a fast, small inference implementation optimized for AI PCs using Intel GPU, CPU, and NPU.
### Model Description
- **Developed by:** microsoft
- **Quantized by:** llmware
- **Model type:** phi4
- **Parameters:** 14.7 billion
- **Model Parent:** microsoft/phi-4
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** Chat, general-purpose LLM
- **Quantization:** int4
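A minimal sketch of running the model with OpenVINO, assuming this repository loads through `optimum-intel` (the device string — "CPU", "GPU", or "NPU" — depends on your installed drivers):

```python
from transformers import AutoTokenizer
from optimum.intel import OVModelForCausalLM

# Assumption: the repo's OpenVINO IR loads directly via optimum-intel.
model = OVModelForCausalLM.from_pretrained("llmware/phi-4-ov", device="CPU")
tokenizer = AutoTokenizer.from_pretrained("llmware/phi-4-ov")

inputs = tokenizer("What are the benefits of int4 quantization?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```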
## Model Card Contact
[llmware on hf](https://www.huggingface.co/llmware)
[llmware website](https://www.llmware.ai)
|
ai-forever/FRIDA | ai-forever | 2025-05-26T09:04:19Z | 13,940 | 54 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"t5",
"mteb",
"transformers",
"feature-extraction",
"ru",
"en",
"dataset:ai-forever/solyanka",
"arxiv:2309.10931",
"arxiv:2408.12503",
"base_model:ai-forever/FRED-T5-1.7B",
"base_model:finetune:ai-forever/FRED-T5-1.7B",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-12-26T15:07:35Z | ---
model-index:
- name: FRIDA
results:
- dataset:
config: default
name: MTEB CEDRClassification (default)
revision: c0ba03d058e3e1b2f3fd20518875a4563dd12db4
split: test
type: ai-forever/cedr-classification
metrics:
- type: accuracy
value: 64.60148777895856
- type: f1
value: 70.36630348039266
- type: lrap
value: 92.47290116896953
- type: main_score
value: 64.60148777895856
task:
type: MultilabelClassification
- dataset:
config: default
name: MTEB GeoreviewClassification (default)
revision: 3765c0d1de6b7d264bc459433c45e5a75513839c
split: test
type: ai-forever/georeview-classification
metrics:
- type: accuracy
value: 57.70996093750001
- type: f1
value: 53.18542982057098
- type: f1_weighted
value: 53.17663229582108
- type: main_score
value: 57.70996093750001
task:
type: Classification
- dataset:
config: default
name: MTEB GeoreviewClusteringP2P (default)
revision: 97a313c8fc85b47f13f33e7e9a95c1ad888c7fec
split: test
type: ai-forever/georeview-clustering-p2p
metrics:
- type: main_score
value: 78.25468393043356
- type: v_measure
value: 78.25468393043356
- type: v_measure_std
value: 0.5094366871364238
task:
type: Clustering
- dataset:
config: default
name: MTEB HeadlineClassification (default)
revision: 2fe05ee6b5832cda29f2ef7aaad7b7fe6a3609eb
split: test
type: ai-forever/headline-classification
metrics:
- type: accuracy
value: 89.0185546875
- type: f1
value: 88.993933120612
- type: f1_weighted
value: 88.99276764225768
- type: main_score
value: 89.0185546875
task:
type: Classification
- dataset:
config: default
name: MTEB InappropriatenessClassification (default)
revision: 601651fdc45ef243751676e62dd7a19f491c0285
split: test
type: ai-forever/inappropriateness-classification
metrics:
- type: accuracy
value: 78.330078125
- type: ap
value: 73.17856750532495
- type: ap_weighted
value: 73.17856750532495
- type: f1
value: 78.20169867599041
- type: f1_weighted
value: 78.20169867599041
- type: main_score
value: 78.330078125
task:
type: Classification
- dataset:
config: default
name: MTEB KinopoiskClassification (default)
revision: 5911f26666ac11af46cb9c6849d0dc80a378af24
split: test
type: ai-forever/kinopoisk-sentiment-classification
metrics:
- type: accuracy
value: 70.46666666666665
- type: f1
value: 65.83951766538878
- type: f1_weighted
value: 65.83951766538878
- type: main_score
value: 70.46666666666665
task:
type: Classification
- dataset:
config: ru
name: MTEB MIRACLReranking (ru)
revision: 6d1962c527217f8927fca80f890f14f36b2802af
split: dev
type: miracl/mmteb-miracl-reranking
metrics:
- type: MAP@1(MIRACL)
value: 39.023
- type: MAP@10(MIRACL)
value: 60.208
- type: MAP@100(MIRACL)
value: 61.672000000000004
- type: MAP@1000(MIRACL)
value: 61.672000000000004
- type: MAP@20(MIRACL)
value: 61.30799999999999
- type: MAP@3(MIRACL)
value: 53.33
- type: MAP@5(MIRACL)
value: 57.289
- type: NDCG@1(MIRACL)
value: 63.352
- type: NDCG@10(MIRACL)
value: 66.042
- type: NDCG@100(MIRACL)
value: 68.702
- type: NDCG@1000(MIRACL)
value: 68.702
- type: NDCG@20(MIRACL)
value: 67.768
- type: NDCG@3(MIRACL)
value: 61.925
- type: NDCG@5(MIRACL)
value: 63.327
- type: P@1(MIRACL)
value: 63.352
- type: P@10(MIRACL)
value: 16.512
- type: P@100(MIRACL)
value: 1.9529999999999998
- type: P@1000(MIRACL)
value: 0.19499999999999998
- type: P@20(MIRACL)
value: 9.13
- type: P@3(MIRACL)
value: 37.878
- type: P@5(MIRACL)
value: 27.586
- type: Recall@1(MIRACL)
value: 39.023
- type: Recall@10(MIRACL)
value: 72.35000000000001
- type: Recall@100(MIRACL)
value: 79.952
- type: Recall@1000(MIRACL)
value: 79.952
- type: Recall@20(MIRACL)
value: 76.828
- type: Recall@3(MIRACL)
value: 57.769999999999996
- type: Recall@5(MIRACL)
value: 64.91900000000001
- type: main_score
value: 66.042
- type: nAUC_MAP@1000_diff1(MIRACL)
value: 27.150388833033052
- type: nAUC_MAP@1000_max(MIRACL)
value: 55.15672274267081
- type: nAUC_MAP@1000_std(MIRACL)
value: 30.088939934575553
- type: nAUC_MAP@100_diff1(MIRACL)
value: 27.150388833033052
- type: nAUC_MAP@100_max(MIRACL)
value: 55.15672274267081
- type: nAUC_MAP@100_std(MIRACL)
value: 30.088939934575553
- type: nAUC_MAP@10_diff1(MIRACL)
value: 27.853691773641742
- type: nAUC_MAP@10_max(MIRACL)
value: 52.89390350055654
- type: nAUC_MAP@10_std(MIRACL)
value: 28.08732516551691
- type: nAUC_MAP@1_diff1(MIRACL)
value: 43.23179150244192
- type: nAUC_MAP@1_max(MIRACL)
value: 29.923943954188864
- type: nAUC_MAP@1_std(MIRACL)
value: 7.447084370195121
- type: nAUC_MAP@20_diff1(MIRACL)
value: 27.328384072311675
- type: nAUC_MAP@20_max(MIRACL)
value: 54.60286379835721
- type: nAUC_MAP@20_std(MIRACL)
value: 29.8084128980043
- type: nAUC_MAP@3_diff1(MIRACL)
value: 31.244971536944554
- type: nAUC_MAP@3_max(MIRACL)
value: 43.63984692803854
- type: nAUC_MAP@3_std(MIRACL)
value: 18.609234683765887
- type: nAUC_MAP@5_diff1(MIRACL)
value: 29.088760492638286
- type: nAUC_MAP@5_max(MIRACL)
value: 48.30474364461509
- type: nAUC_MAP@5_std(MIRACL)
value: 23.817514353844224
- type: nAUC_NDCG@1000_diff1(MIRACL)
value: 23.12754356408408
- type: nAUC_NDCG@1000_max(MIRACL)
value: 64.24894553363303
- type: nAUC_NDCG@1000_std(MIRACL)
value: 38.19318050598967
- type: nAUC_NDCG@100_diff1(MIRACL)
value: 23.12754356408408
- type: nAUC_NDCG@100_max(MIRACL)
value: 64.24894553363303
- type: nAUC_NDCG@100_std(MIRACL)
value: 38.19318050598967
- type: nAUC_NDCG@10_diff1(MIRACL)
value: 24.779856373697275
- type: nAUC_NDCG@10_max(MIRACL)
value: 60.4054459738118
- type: nAUC_NDCG@10_std(MIRACL)
value: 35.148950441182784
- type: nAUC_NDCG@1_diff1(MIRACL)
value: 35.605865569438556
- type: nAUC_NDCG@1_max(MIRACL)
value: 65.77787399715454
- type: nAUC_NDCG@1_std(MIRACL)
value: 34.34726892885082
- type: nAUC_NDCG@20_diff1(MIRACL)
value: 23.71231783125691
- type: nAUC_NDCG@20_max(MIRACL)
value: 62.89676599488004
- type: nAUC_NDCG@20_std(MIRACL)
value: 37.697052941884316
- type: nAUC_NDCG@3_diff1(MIRACL)
value: 26.109027741640865
- type: nAUC_NDCG@3_max(MIRACL)
value: 56.22356793638693
- type: nAUC_NDCG@3_std(MIRACL)
value: 29.9437568508688
- type: nAUC_NDCG@5_diff1(MIRACL)
value: 25.98644715327336
- type: nAUC_NDCG@5_max(MIRACL)
value: 56.25032008404774
- type: nAUC_NDCG@5_std(MIRACL)
value: 31.581899860862578
- type: nAUC_P@1000_diff1(MIRACL)
value: -18.29912787064644
- type: nAUC_P@1000_max(MIRACL)
value: 31.811344878776087
- type: nAUC_P@1000_std(MIRACL)
value: 30.163820183304914
- type: nAUC_P@100_diff1(MIRACL)
value: -18.299127870646405
- type: nAUC_P@100_max(MIRACL)
value: 31.811344878776133
- type: nAUC_P@100_std(MIRACL)
value: 30.163820183304956
- type: nAUC_P@10_diff1(MIRACL)
value: -15.96416268531149
- type: nAUC_P@10_max(MIRACL)
value: 36.989578896466526
- type: nAUC_P@10_std(MIRACL)
value: 34.54507111688143
- type: nAUC_P@1_diff1(MIRACL)
value: 35.605865569438556
- type: nAUC_P@1_max(MIRACL)
value: 65.77787399715454
- type: nAUC_P@1_std(MIRACL)
value: 34.34726892885082
- type: nAUC_P@20_diff1(MIRACL)
value: -17.443963421383287
- type: nAUC_P@20_max(MIRACL)
value: 34.309618168778385
- type: nAUC_P@20_std(MIRACL)
value: 33.38820956485373
- type: nAUC_P@3_diff1(MIRACL)
value: -8.533621861815652
- type: nAUC_P@3_max(MIRACL)
value: 45.90408386776497
- type: nAUC_P@3_std(MIRACL)
value: 34.50459351305535
- type: nAUC_P@5_diff1(MIRACL)
value: -13.207968899314865
- type: nAUC_P@5_max(MIRACL)
value: 40.37718282248973
- type: nAUC_P@5_std(MIRACL)
value: 35.601417332196206
- type: nAUC_Recall@1000_diff1(MIRACL)
value: 7.907304198177226
- type: nAUC_Recall@1000_max(MIRACL)
value: 77.82197832361145
- type: nAUC_Recall@1000_std(MIRACL)
value: 52.66957487246724
- type: nAUC_Recall@100_diff1(MIRACL)
value: 7.907304198177226
- type: nAUC_Recall@100_max(MIRACL)
value: 77.82197832361145
- type: nAUC_Recall@100_std(MIRACL)
value: 52.66957487246724
- type: nAUC_Recall@10_diff1(MIRACL)
value: 15.498121023488693
- type: nAUC_Recall@10_max(MIRACL)
value: 62.24320529338724
- type: nAUC_Recall@10_std(MIRACL)
value: 40.60221460946224
- type: nAUC_Recall@1_diff1(MIRACL)
value: 43.23179150244192
- type: nAUC_Recall@1_max(MIRACL)
value: 29.923943954188864
- type: nAUC_Recall@1_std(MIRACL)
value: 7.447084370195121
- type: nAUC_Recall@20_diff1(MIRACL)
value: 11.457044176116248
- type: nAUC_Recall@20_max(MIRACL)
value: 70.3493054342368
- type: nAUC_Recall@20_std(MIRACL)
value: 49.27124296325928
- type: nAUC_Recall@3_diff1(MIRACL)
value: 25.12077828977941
- type: nAUC_Recall@3_max(MIRACL)
value: 42.903379317937166
- type: nAUC_Recall@3_std(MIRACL)
value: 20.324501722161497
- type: nAUC_Recall@5_diff1(MIRACL)
value: 20.925701235197977
- type: nAUC_Recall@5_max(MIRACL)
value: 49.85323960390812
- type: nAUC_Recall@5_std(MIRACL)
value: 29.04484539530469
task:
type: Reranking
- dataset:
config: ru
name: MTEB MIRACLRetrieval (ru)
revision: main
split: dev
type: miracl/mmteb-miracl
metrics:
- type: main_score
value: 71.882
- type: map_at_1
value: 37.913000000000004
- type: map_at_10
value: 62.604000000000006
- type: map_at_100
value: 64.925
- type: map_at_1000
value: 64.992
- type: map_at_20
value: 64.081
- type: map_at_3
value: 55.212
- type: map_at_5
value: 59.445
- type: mrr_at_1
value: 73.24281150159744
- type: mrr_at_10
value: 81.65043866321825
- type: mrr_at_100
value: 81.85391378818977
- type: mrr_at_1000
value: 81.85753390802569
- type: mrr_at_20
value: 81.81045606130179
- type: mrr_at_3
value: 80.56443024494146
- type: mrr_at_5
value: 81.30724174653893
- type: nauc_map_at_1000_diff1
value: 26.962150235593356
- type: nauc_map_at_1000_max
value: 29.234958037854568
- type: nauc_map_at_1000_std
value: -2.4294465103633884
- type: nauc_map_at_100_diff1
value: 26.92990252114163
- type: nauc_map_at_100_max
value: 29.206328533120118
- type: nauc_map_at_100_std
value: -2.437371090941197
- type: nauc_map_at_10_diff1
value: 25.758265691179226
- type: nauc_map_at_10_max
value: 26.949978490795317
- type: nauc_map_at_10_std
value: -5.484961002106038
- type: nauc_map_at_1_diff1
value: 34.70849461278043
- type: nauc_map_at_1_max
value: 12.778570893623042
- type: nauc_map_at_1_std
value: -13.018292652743938
- type: nauc_map_at_20_diff1
value: 26.659923008218268
- type: nauc_map_at_20_max
value: 28.341440871568185
- type: nauc_map_at_20_std
value: -3.614549844913084
- type: nauc_map_at_3_diff1
value: 27.197629021438203
- type: nauc_map_at_3_max
value: 20.701094874050856
- type: nauc_map_at_3_std
value: -12.062992301112041
- type: nauc_map_at_5_diff1
value: 25.51793537203295
- type: nauc_map_at_5_max
value: 23.80396771243794
- type: nauc_map_at_5_std
value: -8.920465695323575
- type: nauc_mrr_at_1000_diff1
value: 45.14819989592967
- type: nauc_mrr_at_1000_max
value: 53.29202156141053
- type: nauc_mrr_at_1000_std
value: 18.037336462510524
- type: nauc_mrr_at_100_diff1
value: 45.15287600228451
- type: nauc_mrr_at_100_max
value: 53.29979751928615
- type: nauc_mrr_at_100_std
value: 18.04996604778386
- type: nauc_mrr_at_10_diff1
value: 44.96865105944474
- type: nauc_mrr_at_10_max
value: 53.53323465323092
- type: nauc_mrr_at_10_std
value: 18.25001344917689
- type: nauc_mrr_at_1_diff1
value: 46.16604946873163
- type: nauc_mrr_at_1_max
value: 48.573651103547874
- type: nauc_mrr_at_1_std
value: 13.764871626330915
- type: nauc_mrr_at_20_diff1
value: 45.11925458479102
- type: nauc_mrr_at_20_max
value: 53.35685123898342
- type: nauc_mrr_at_20_std
value: 18.127344968819905
- type: nauc_mrr_at_3_diff1
value: 45.377195452730234
- type: nauc_mrr_at_3_max
value: 53.35146309217089
- type: nauc_mrr_at_3_std
value: 17.47105877186237
- type: nauc_mrr_at_5_diff1
value: 45.00525578771549
- type: nauc_mrr_at_5_max
value: 53.76227254707128
- type: nauc_mrr_at_5_std
value: 18.437290060746957
- type: nauc_ndcg_at_1000_diff1
value: 31.19215594457491
- type: nauc_ndcg_at_1000_max
value: 38.09555406458668
- type: nauc_ndcg_at_1000_std
value: 7.225628621238009
- type: nauc_ndcg_at_100_diff1
value: 30.726331247999934
- type: nauc_ndcg_at_100_max
value: 37.81369589418277
- type: nauc_ndcg_at_100_std
value: 7.242855238555071
- type: nauc_ndcg_at_10_diff1
value: 27.514048333744835
- type: nauc_ndcg_at_10_max
value: 33.10990399385253
- type: nauc_ndcg_at_10_std
value: 0.3051899572112002
- type: nauc_ndcg_at_1_diff1
value: 47.06089085235751
- type: nauc_ndcg_at_1_max
value: 47.7300872370495
- type: nauc_ndcg_at_1_std
value: 12.468605493613916
- type: nauc_ndcg_at_20_diff1
value: 29.404215438764496
- type: nauc_ndcg_at_20_max
value: 35.26967886796471
- type: nauc_ndcg_at_20_std
value: 3.7214697890813353
- type: nauc_ndcg_at_3_diff1
value: 29.448848639643067
- type: nauc_ndcg_at_3_max
value: 33.85912412370657
- type: nauc_ndcg_at_3_std
value: 0.895453646819452
- type: nauc_ndcg_at_5_diff1
value: 26.916649012613526
- type: nauc_ndcg_at_5_max
value: 30.899005979291644
- type: nauc_ndcg_at_5_std
value: -1.0001575639156615
- type: nauc_precision_at_1000_diff1
value: -8.492004667432635
- type: nauc_precision_at_1000_max
value: 14.970190384017679
- type: nauc_precision_at_1000_std
value: 32.871386621137816
- type: nauc_precision_at_100_diff1
value: -8.287314133999967
- type: nauc_precision_at_100_max
value: 17.794821961284736
- type: nauc_precision_at_100_std
value: 35.092483550562
- type: nauc_precision_at_10_diff1
value: -7.594128993028063
- type: nauc_precision_at_10_max
value: 24.691446370325732
- type: nauc_precision_at_10_std
value: 30.126552282608493
- type: nauc_precision_at_1_diff1
value: 47.06089085235751
- type: nauc_precision_at_1_max
value: 47.7300872370495
- type: nauc_precision_at_1_std
value: 12.468605493613916
- type: nauc_precision_at_20_diff1
value: -6.503872195775146
- type: nauc_precision_at_20_max
value: 21.789730053141312
- type: nauc_precision_at_20_std
value: 32.61349377558794
- type: nauc_precision_at_3_diff1
value: 0.67417079971061
- type: nauc_precision_at_3_max
value: 30.793871354370662
- type: nauc_precision_at_3_std
value: 18.35266479252011
- type: nauc_precision_at_5_diff1
value: -7.088881730215777
- type: nauc_precision_at_5_max
value: 26.539771712769006
- type: nauc_precision_at_5_std
value: 24.116262291865834
- type: nauc_recall_at_1000_diff1
value: 34.53263588412461
- type: nauc_recall_at_1000_max
value: 63.54157869100173
- type: nauc_recall_at_1000_std
value: 64.19854844792808
- type: nauc_recall_at_100_diff1
value: 22.86564728642275
- type: nauc_recall_at_100_max
value: 40.350507162549825
- type: nauc_recall_at_100_std
value: 29.24492545863015
- type: nauc_recall_at_10_diff1
value: 15.384818367225009
- type: nauc_recall_at_10_max
value: 24.41108571453699
- type: nauc_recall_at_10_std
value: -3.9216160585776323
- type: nauc_recall_at_1_diff1
value: 34.70849461278043
- type: nauc_recall_at_1_max
value: 12.778570893623042
- type: nauc_recall_at_1_std
value: -13.018292652743938
- type: nauc_recall_at_20_diff1
value: 18.122499000084208
- type: nauc_recall_at_20_max
value: 26.63104220179424
- type: nauc_recall_at_20_std
value: 3.969217732521512
- type: nauc_recall_at_3_diff1
value: 21.413050725250116
- type: nauc_recall_at_3_max
value: 16.18894988386887
- type: nauc_recall_at_3_std
value: -15.24884339282375
- type: nauc_recall_at_5_diff1
value: 16.35673072212927
- type: nauc_recall_at_5_max
value: 18.607003829267846
- type: nauc_recall_at_5_std
value: -10.463525876945454
- type: ndcg_at_1
value: 72.923
- type: ndcg_at_10
value: 71.882
- type: ndcg_at_100
value: 77.09899999999999
- type: ndcg_at_1000
value: 77.835
- type: ndcg_at_20
value: 74.497
- type: ndcg_at_3
value: 68.504
- type: ndcg_at_5
value: 69.068
- type: precision_at_1
value: 72.923
- type: precision_at_10
value: 19.936
- type: precision_at_100
value: 2.6310000000000002
- type: precision_at_1000
value: 0.27799999999999997
- type: precision_at_20
value: 11.33
- type: precision_at_3
value: 45.927
- type: precision_at_5
value: 33.131
- type: recall_at_1
value: 37.913000000000004
- type: recall_at_10
value: 78.365
- type: recall_at_100
value: 94.348
- type: recall_at_1000
value: 98.187
- type: recall_at_20
value: 85.229
- type: recall_at_3
value: 61.42999999999999
- type: recall_at_5
value: 69.56700000000001
task:
type: Retrieval
- dataset:
config: ru
name: MTEB MassiveIntentClassification (ru)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 79.11903160726294
- type: f1
value: 76.22609082694545
- type: f1_weighted
value: 77.81461248063566
- type: main_score
value: 79.11903160726294
task:
type: Classification
- dataset:
config: ru
name: MTEB MassiveScenarioClassification (ru)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 88.80632145258912
- type: f1
value: 87.53157475314829
- type: f1_weighted
value: 88.22733432521495
- type: main_score
value: 88.80632145258912
task:
type: Classification
- dataset:
config: default
name: MTEB RUParaPhraserSTS (default)
revision: 43265056790b8f7c59e0139acb4be0a8dad2c8f4
split: test
type: merionum/ru_paraphraser
metrics:
- type: cosine_pearson
value: 72.70307124858925
- type: cosine_spearman
value: 78.09439086920204
- type: euclidean_pearson
value: 76.2033672014715
- type: euclidean_spearman
value: 78.09439086920204
- type: main_score
value: 78.09439086920204
- type: manhattan_pearson
value: 76.11750470223116
- type: manhattan_spearman
value: 78.01081063503413
- type: pearson
value: 72.70307124858925
- type: spearman
value: 78.09439086920204
task:
type: STS
- dataset:
config: default
name: MTEB RiaNewsRetrieval (default)
revision: 82374b0bbacda6114f39ff9c5b925fa1512ca5d7
split: test
type: ai-forever/ria-news-retrieval
metrics:
- type: main_score
value: 86.819
- type: map_at_1
value: 78.79
- type: map_at_10
value: 84.516
- type: map_at_100
value: 84.68
- type: map_at_1000
value: 84.685
- type: map_at_20
value: 84.624
- type: map_at_3
value: 83.722
- type: map_at_5
value: 84.246
- type: mrr_at_1
value: 78.78
- type: mrr_at_10
value: 84.51815476190441
- type: mrr_at_100
value: 84.68390840473289
- type: mrr_at_1000
value: 84.68947095200002
- type: mrr_at_20
value: 84.62958130822527
- type: mrr_at_3
value: 83.74499999999964
- type: mrr_at_5
value: 84.23849999999955
- type: nauc_map_at_1000_diff1
value: 82.09914867708899
- type: nauc_map_at_1000_max
value: 43.02024854784386
- type: nauc_map_at_1000_std
value: -22.919695880762777
- type: nauc_map_at_100_diff1
value: 82.09705922783733
- type: nauc_map_at_100_max
value: 43.02697379581718
- type: nauc_map_at_100_std
value: -22.90719212899522
- type: nauc_map_at_10_diff1
value: 82.04404594672894
- type: nauc_map_at_10_max
value: 43.06752103182731
- type: nauc_map_at_10_std
value: -23.007870153273576
- type: nauc_map_at_1_diff1
value: 83.89134152210333
- type: nauc_map_at_1_max
value: 38.083626428503415
- type: nauc_map_at_1_std
value: -25.817960401194252
- type: nauc_map_at_20_diff1
value: 82.08534662247806
- type: nauc_map_at_20_max
value: 43.074305042312346
- type: nauc_map_at_20_std
value: -22.91785703613217
- type: nauc_map_at_3_diff1
value: 81.7967508697558
- type: nauc_map_at_3_max
value: 42.90927479098251
- type: nauc_map_at_3_std
value: -24.01312203859392
- type: nauc_map_at_5_diff1
value: 81.90704517505098
- type: nauc_map_at_5_max
value: 43.05204677044616
- type: nauc_map_at_5_std
value: -23.267331507554896
- type: nauc_mrr_at_1000_diff1
value: 82.11902348082472
- type: nauc_mrr_at_1000_max
value: 43.04118936353063
- type: nauc_mrr_at_1000_std
value: -22.858804296830773
- type: nauc_mrr_at_100_diff1
value: 82.11685562002263
- type: nauc_mrr_at_100_max
value: 43.0482537895494
- type: nauc_mrr_at_100_std
value: -22.84431127787993
- type: nauc_mrr_at_10_diff1
value: 82.06909958688058
- type: nauc_mrr_at_10_max
value: 43.07921689466605
- type: nauc_mrr_at_10_std
value: -22.957623576663234
- type: nauc_mrr_at_1_diff1
value: 83.91147637794326
- type: nauc_mrr_at_1_max
value: 37.91917159543152
- type: nauc_mrr_at_1_std
value: -26.141868289283266
- type: nauc_mrr_at_20_diff1
value: 82.10314004731809
- type: nauc_mrr_at_20_max
value: 43.09295406509764
- type: nauc_mrr_at_20_std
value: -22.862091782178787
- type: nauc_mrr_at_3_diff1
value: 81.82117067269036
- type: nauc_mrr_at_3_max
value: 42.94628953323521
- type: nauc_mrr_at_3_std
value: -23.852510312400714
- type: nauc_mrr_at_5_diff1
value: 81.92857441701598
- type: nauc_mrr_at_5_max
value: 43.129719354492934
- type: nauc_mrr_at_5_std
value: -23.145342272624085
- type: nauc_ndcg_at_1000_diff1
value: 81.75015729717991
- type: nauc_ndcg_at_1000_max
value: 44.7266586308995
- type: nauc_ndcg_at_1000_std
value: -20.60663899715267
- type: nauc_ndcg_at_100_diff1
value: 81.6897808298767
- type: nauc_ndcg_at_100_max
value: 44.99492791287099
- type: nauc_ndcg_at_100_std
value: -20.09637266506936
- type: nauc_ndcg_at_10_diff1
value: 81.46290312197337
- type: nauc_ndcg_at_10_max
value: 45.30218378452244
- type: nauc_ndcg_at_10_std
value: -20.70393523891777
- type: nauc_ndcg_at_1_diff1
value: 83.89134152210333
- type: nauc_ndcg_at_1_max
value: 38.083626428503415
- type: nauc_ndcg_at_1_std
value: -25.817960401194252
- type: nauc_ndcg_at_20_diff1
value: 81.61080772657213
- type: nauc_ndcg_at_20_max
value: 45.36571800492172
- type: nauc_ndcg_at_20_std
value: -20.278763852504042
- type: nauc_ndcg_at_3_diff1
value: 80.95965359410461
- type: nauc_ndcg_at_3_max
value: 44.756971949205834
- type: nauc_ndcg_at_3_std
value: -23.07797617717319
- type: nauc_ndcg_at_5_diff1
value: 81.12417712163976
- type: nauc_ndcg_at_5_max
value: 45.15727381406512
- type: nauc_ndcg_at_5_std
value: -21.52861766165519
- type: nauc_precision_at_1000_diff1
value: 76.80566850396093
- type: nauc_precision_at_1000_max
value: 82.45685370922442
- type: nauc_precision_at_1000_std
value: 46.93570976777808
- type: nauc_precision_at_100_diff1
value: 77.21645520953484
- type: nauc_precision_at_100_max
value: 73.43604108309935
- type: nauc_precision_at_100_std
value: 31.978176891671367
- type: nauc_precision_at_10_diff1
value: 77.88251664302092
- type: nauc_precision_at_10_max
value: 60.58112638995018
- type: nauc_precision_at_10_std
value: -3.674424315180332
- type: nauc_precision_at_1_diff1
value: 83.89134152210333
- type: nauc_precision_at_1_max
value: 38.083626428503415
- type: nauc_precision_at_1_std
value: -25.817960401194252
- type: nauc_precision_at_20_diff1
value: 78.16426786697438
- type: nauc_precision_at_20_max
value: 66.0723612699222
- type: nauc_precision_at_20_std
value: 6.121527084555938
- type: nauc_precision_at_3_diff1
value: 77.43122492166451
- type: nauc_precision_at_3_max
value: 52.50727288548085
- type: nauc_precision_at_3_std
value: -19.036076920799427
- type: nauc_precision_at_5_diff1
value: 77.1127254320532
- type: nauc_precision_at_5_max
value: 56.100901899221135
- type: nauc_precision_at_5_std
value: -12.009191140844198
- type: nauc_recall_at_1000_diff1
value: 76.80566850396035
- type: nauc_recall_at_1000_max
value: 82.45685370922577
- type: nauc_recall_at_1000_std
value: 46.93570976777776
- type: nauc_recall_at_100_diff1
value: 77.21645520953459
- type: nauc_recall_at_100_max
value: 73.43604108310011
- type: nauc_recall_at_100_std
value: 31.978176891671993
- type: nauc_recall_at_10_diff1
value: 77.88251664302089
- type: nauc_recall_at_10_max
value: 60.58112638994999
- type: nauc_recall_at_10_std
value: -3.6744243151805427
- type: nauc_recall_at_1_diff1
value: 83.89134152210333
- type: nauc_recall_at_1_max
value: 38.083626428503415
- type: nauc_recall_at_1_std
value: -25.817960401194252
- type: nauc_recall_at_20_diff1
value: 78.16426786697409
- type: nauc_recall_at_20_max
value: 66.07236126992217
- type: nauc_recall_at_20_std
value: 6.121527084555941
- type: nauc_recall_at_3_diff1
value: 77.43122492166454
- type: nauc_recall_at_3_max
value: 52.507272885480816
- type: nauc_recall_at_3_std
value: -19.036076920799776
- type: nauc_recall_at_5_diff1
value: 77.11272543205318
- type: nauc_recall_at_5_max
value: 56.10090189922128
- type: nauc_recall_at_5_std
value: -12.009191140843809
- type: ndcg_at_1
value: 78.79
- type: ndcg_at_10
value: 86.819
- type: ndcg_at_100
value: 87.599
- type: ndcg_at_1000
value: 87.761
- type: ndcg_at_20
value: 87.208
- type: ndcg_at_3
value: 85.222
- type: ndcg_at_5
value: 86.164
- type: precision_at_1
value: 78.79
- type: precision_at_10
value: 9.384
- type: precision_at_100
value: 0.975
- type: precision_at_1000
value: 0.099
- type: precision_at_20
value: 4.769
- type: precision_at_3
value: 29.842999999999996
- type: precision_at_5
value: 18.362000000000002
- type: recall_at_1
value: 78.79
- type: recall_at_10
value: 93.84
- type: recall_at_100
value: 97.45
- type: recall_at_1000
value: 98.76
- type: recall_at_20
value: 95.37
- type: recall_at_3
value: 89.53
- type: recall_at_5
value: 91.81
task:
type: Retrieval
- dataset:
config: default
name: MTEB RuBQReranking (default)
revision: 2e96b8f098fa4b0950fc58eacadeb31c0d0c7fa2
split: test
type: ai-forever/rubq-reranking
metrics:
- type: main_score
value: 77.07394404835635
- type: map
value: 77.07394404835635
- type: mrr
value: 82.53144412718882
- type: nAUC_map_diff1
value: 45.29805217456628
- type: nAUC_map_max
value: 34.39894042439188
- type: nAUC_map_std
value: 21.11309674418275
- type: nAUC_mrr_diff1
value: 54.783994737367046
- type: nAUC_mrr_max
value: 45.68526733900048
- type: nAUC_mrr_std
value: 28.22466385500339
task:
type: Reranking
- dataset:
config: default
name: MTEB RuBQRetrieval (default)
revision: e19b6ffa60b3bc248e0b41f4cc37c26a55c2a67b
split: test
type: ai-forever/rubq-retrieval
metrics:
- type: main_score
value: 72.392
- type: map_at_1
value: 47.370000000000005
- type: map_at_10
value: 65.503
- type: map_at_100
value: 66.38
- type: map_at_1000
value: 66.42099999999999
- type: map_at_20
value: 66.071
- type: map_at_3
value: 61.439
- type: map_at_5
value: 63.922999999999995
- type: mrr_at_1
value: 67.37588652482269
- type: mrr_at_10
value: 76.0066747345116
- type: mrr_at_100
value: 76.25754138969413
- type: mrr_at_1000
value: 76.26968825657428
- type: mrr_at_20
value: 76.17548265904622
- type: mrr_at_3
value: 74.61583924349881
- type: mrr_at_5
value: 75.46690307328608
- type: nauc_map_at_1000_diff1
value: 42.52570720187294
- type: nauc_map_at_1000_max
value: 37.40318318724238
- type: nauc_map_at_1000_std
value: 0.6037788201535506
- type: nauc_map_at_100_diff1
value: 42.493410029691226
- type: nauc_map_at_100_max
value: 37.39802489244377
- type: nauc_map_at_100_std
value: 0.6071359951887154
- type: nauc_map_at_10_diff1
value: 42.09833519659916
- type: nauc_map_at_10_max
value: 37.1184138958874
- type: nauc_map_at_10_std
value: 0.4063543094010351
- type: nauc_map_at_1_diff1
value: 49.56605205141156
- type: nauc_map_at_1_max
value: 26.251096698710384
- type: nauc_map_at_1_std
value: -4.580748485387834
- type: nauc_map_at_20_diff1
value: 42.33372393482018
- type: nauc_map_at_20_max
value: 37.416955604649985
- type: nauc_map_at_20_std
value: 0.6050577802787294
- type: nauc_map_at_3_diff1
value: 42.362234475441845
- type: nauc_map_at_3_max
value: 34.56001379838821
- type: nauc_map_at_3_std
value: -1.507636598929042
- type: nauc_map_at_5_diff1
value: 42.0202264882535
- type: nauc_map_at_5_max
value: 36.64306050200848
- type: nauc_map_at_5_std
value: -0.09509025708798424
- type: nauc_mrr_at_1000_diff1
value: 58.99601742026931
- type: nauc_mrr_at_1000_max
value: 49.61561872452777
- type: nauc_mrr_at_1000_std
value: 2.3956102974352356
- type: nauc_mrr_at_100_diff1
value: 58.9865943101085
- type: nauc_mrr_at_100_max
value: 49.6248111507265
- type: nauc_mrr_at_100_std
value: 2.411155095066369
- type: nauc_mrr_at_10_diff1
value: 58.81758131092919
- type: nauc_mrr_at_10_max
value: 49.780365572616695
- type: nauc_mrr_at_10_std
value: 2.7068696565195944
- type: nauc_mrr_at_1_diff1
value: 61.67036882487055
- type: nauc_mrr_at_1_max
value: 45.455271042821714
- type: nauc_mrr_at_1_std
value: -0.9370526815458349
- type: nauc_mrr_at_20_diff1
value: 58.93674818203478
- type: nauc_mrr_at_20_max
value: 49.703218108625215
- type: nauc_mrr_at_20_std
value: 2.4473106598190415
- type: nauc_mrr_at_3_diff1
value: 59.046856598788445
- type: nauc_mrr_at_3_max
value: 49.37161726123392
- type: nauc_mrr_at_3_std
value: 1.5110936686701506
- type: nauc_mrr_at_5_diff1
value: 58.92289378915668
- type: nauc_mrr_at_5_max
value: 49.847638994134144
- type: nauc_mrr_at_5_std
value: 2.420421880131702
- type: nauc_ndcg_at_1000_diff1
value: 45.56062215161734
- type: nauc_ndcg_at_1000_max
value: 41.507152286702
- type: nauc_ndcg_at_1000_std
value: 2.79388283208751
- type: nauc_ndcg_at_100_diff1
value: 44.84064192570408
- type: nauc_ndcg_at_100_max
value: 41.50353573562353
- type: nauc_ndcg_at_100_std
value: 3.1804999773629357
- type: nauc_ndcg_at_10_diff1
value: 43.341482144213614
- type: nauc_ndcg_at_10_max
value: 41.159590898395074
- type: nauc_ndcg_at_10_std
value: 2.945242338240843
- type: nauc_ndcg_at_1_diff1
value: 62.23623985611396
- type: nauc_ndcg_at_1_max
value: 45.04945770947091
- type: nauc_ndcg_at_1_std
value: -0.8804967656575725
- type: nauc_ndcg_at_20_diff1
value: 43.905372612093664
- type: nauc_ndcg_at_20_max
value: 41.797709837872446
- type: nauc_ndcg_at_20_std
value: 3.1853356915569653
- type: nauc_ndcg_at_3_diff1
value: 44.18163998834299
- type: nauc_ndcg_at_3_max
value: 38.352891017864636
- type: nauc_ndcg_at_3_std
value: -0.8235767021150929
- type: nauc_ndcg_at_5_diff1
value: 43.41374688421302
- type: nauc_ndcg_at_5_max
value: 40.390365601593956
- type: nauc_ndcg_at_5_std
value: 1.6743650108127537
- type: nauc_precision_at_1000_diff1
value: -9.711058370691381
- type: nauc_precision_at_1000_max
value: 6.97321343449286
- type: nauc_precision_at_1000_std
value: 7.933531916622121
- type: nauc_precision_at_100_diff1
value: -8.247029644152319
- type: nauc_precision_at_100_max
value: 10.86740140944616
- type: nauc_precision_at_100_std
value: 9.581885544675918
- type: nauc_precision_at_10_diff1
value: -2.409043695429943
- type: nauc_precision_at_10_max
value: 21.04733206074314
- type: nauc_precision_at_10_std
value: 10.03334651647101
- type: nauc_precision_at_1_diff1
value: 62.23623985611396
- type: nauc_precision_at_1_max
value: 45.04945770947091
- type: nauc_precision_at_1_std
value: -0.8804967656575725
- type: nauc_precision_at_20_diff1
value: -5.230303656931621
- type: nauc_precision_at_20_max
value: 17.77799716919181
- type: nauc_precision_at_20_std
value: 10.739127998618654
- type: nauc_precision_at_3_diff1
value: 10.40376424999862
- type: nauc_precision_at_3_max
value: 30.933333400254035
- type: nauc_precision_at_3_std
value: 6.126209127968004
- type: nauc_precision_at_5_diff1
value: 3.147398101830739
- type: nauc_precision_at_5_max
value: 27.1746309955971
- type: nauc_precision_at_5_std
value: 8.874723615388788
- type: nauc_recall_at_1000_diff1
value: 5.055940692380908
- type: nauc_recall_at_1000_max
value: 22.42031123370267
- type: nauc_recall_at_1000_std
value: 27.75476692527869
- type: nauc_recall_at_100_diff1
value: 17.86391178198642
- type: nauc_recall_at_100_max
value: 34.776134863678955
- type: nauc_recall_at_100_std
value: 18.96377158778504
- type: nauc_recall_at_10_diff1
value: 24.863097695413597
- type: nauc_recall_at_10_max
value: 37.697411651507444
- type: nauc_recall_at_10_std
value: 9.519849994253967
- type: nauc_recall_at_1_diff1
value: 49.56605205141156
- type: nauc_recall_at_1_max
value: 26.251096698710384
- type: nauc_recall_at_1_std
value: -4.580748485387834
- type: nauc_recall_at_20_diff1
value: 22.440602811005636
- type: nauc_recall_at_20_max
value: 39.538861316515
- type: nauc_recall_at_20_std
value: 11.363269553121468
- type: nauc_recall_at_3_diff1
value: 32.80302839873736
- type: nauc_recall_at_3_max
value: 32.53105685012729
- type: nauc_recall_at_3_std
value: -0.7140166410605693
- type: nauc_recall_at_5_diff1
value: 29.375386639154865
- type: nauc_recall_at_5_max
value: 36.91045781164083
- type: nauc_recall_at_5_std
value: 4.725419050262578
- type: ndcg_at_1
value: 67.13900000000001
- type: ndcg_at_10
value: 72.392
- type: ndcg_at_100
value: 75.25800000000001
- type: ndcg_at_1000
value: 75.982
- type: ndcg_at_20
value: 73.783
- type: ndcg_at_3
value: 67.269
- type: ndcg_at_5
value: 69.807
- type: precision_at_1
value: 67.13900000000001
- type: precision_at_10
value: 13.327
- type: precision_at_100
value: 1.5559999999999998
- type: precision_at_1000
value: 0.164
- type: precision_at_20
value: 7.119000000000001
- type: precision_at_3
value: 35.599
- type: precision_at_5
value: 23.936
- type: recall_at_1
value: 47.370000000000005
- type: recall_at_10
value: 82.16
- type: recall_at_100
value: 93.34
- type: recall_at_1000
value: 98.202
- type: recall_at_20
value: 86.687
- type: recall_at_3
value: 69.319
- type: recall_at_5
value: 75.637
task:
type: Retrieval
- dataset:
config: default
name: MTEB RuReviewsClassification (default)
revision: f6d2c31f4dc6b88f468552750bfec05b4b41b05a
split: test
type: ai-forever/ru-reviews-classification
metrics:
- type: accuracy
value: 75.0537109375
- type: f1
value: 74.00523205209554
- type: f1_weighted
value: 74.00436782840376
- type: main_score
value: 75.0537109375
task:
type: Classification
- dataset:
config: default
name: MTEB RuSTSBenchmarkSTS (default)
revision: 7cf24f325c6da6195df55bef3d86b5e0616f3018
split: test
type: ai-forever/ru-stsbenchmark-sts
metrics:
- type: cosine_pearson
value: 81.10255413476487
- type: cosine_spearman
value: 81.40020843157141
- type: euclidean_pearson
value: 81.25155479902466
- type: euclidean_spearman
value: 81.40020831064922
- type: main_score
value: 81.40020843157141
- type: manhattan_pearson
value: 81.1493715249014
- type: manhattan_spearman
value: 81.30973667941649
- type: pearson
value: 81.10255413476487
- type: spearman
value: 81.40020843157141
task:
type: STS
- dataset:
config: default
name: MTEB RuSciBenchGRNTIClassification (default)
revision: 673a610d6d3dd91a547a0d57ae1b56f37ebbf6a1
split: test
type: ai-forever/ru-scibench-grnti-classification
metrics:
- type: accuracy
value: 69.8974609375
- type: f1
value: 68.57837564785511
- type: f1_weighted
value: 68.59030489460784
- type: main_score
value: 69.8974609375
task:
type: Classification
- dataset:
config: default
name: MTEB RuSciBenchGRNTIClusteringP2P (default)
revision: 673a610d6d3dd91a547a0d57ae1b56f37ebbf6a1
split: test
type: ai-forever/ru-scibench-grnti-classification
metrics:
- type: main_score
value: 67.03880348548029
- type: v_measure
value: 67.03880348548029
- type: v_measure_std
value: 0.6126278133139618
task:
type: Clustering
- dataset:
config: default
name: MTEB RuSciBenchOECDClassification (default)
revision: 26c88e99dcaba32bb45d0e1bfc21902337f6d471
split: test
type: ai-forever/ru-scibench-oecd-classification
metrics:
- type: accuracy
value: 54.63378906250001
- type: f1
value: 51.34306420274629
- type: f1_weighted
value: 51.33495867493914
- type: main_score
value: 54.63378906250001
task:
type: Classification
- dataset:
config: default
name: MTEB RuSciBenchOECDClusteringP2P (default)
revision: 26c88e99dcaba32bb45d0e1bfc21902337f6d471
split: test
type: ai-forever/ru-scibench-oecd-classification
metrics:
- type: main_score
value: 56.55947121159027
- type: v_measure
value: 56.55947121159027
- type: v_measure_std
value: 0.5498882006880662
task:
type: Clustering
- dataset:
config: ru
name: MTEB STS22 (ru)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 61.833294921667914
- type: cosine_spearman
value: 63.53967536726357
- type: euclidean_pearson
value: 60.382865218855805
- type: euclidean_spearman
value: 63.53967536726357
- type: main_score
value: 63.53967536726357
- type: manhattan_pearson
value: 60.24879015304578
- type: manhattan_spearman
value: 63.42305760430092
- type: pearson
value: 61.833294921667914
- type: spearman
value: 63.53967536726357
task:
type: STS
- dataset:
config: default
name: MTEB SensitiveTopicsClassification (default)
revision: 416b34a802308eac30e4192afc0ff99bb8dcc7f2
split: test
type: ai-forever/sensitive-topics-classification
metrics:
- type: accuracy
value: 39.8193359375
- type: f1
value: 55.46591740935434
- type: lrap
value: 66.50980631510454
- type: main_score
value: 39.8193359375
task:
type: MultilabelClassification
- dataset:
config: default
name: MTEB TERRa (default)
revision: 7b58f24536063837d644aab9a023c62199b2a612
split: dev
type: ai-forever/terra-pairclassification
metrics:
- type: cosine_accuracy
value: 66.77524429967427
- type: cosine_accuracy_threshold
value: 55.58975338935852
- type: cosine_ap
value: 66.4567219323658
- type: cosine_f1
value: 70.64676616915423
- type: cosine_f1_threshold
value: 45.55969536304474
- type: cosine_precision
value: 57.028112449799195
- type: cosine_recall
value: 92.81045751633987
- type: dot_accuracy
value: 66.77524429967427
- type: dot_accuracy_threshold
value: 55.589759349823
- type: dot_ap
value: 66.4567219323658
- type: dot_f1
value: 70.64676616915423
- type: dot_f1_threshold
value: 45.55969536304474
- type: dot_precision
value: 57.028112449799195
- type: dot_recall
value: 92.81045751633987
- type: euclidean_accuracy
value: 66.77524429967427
- type: euclidean_accuracy_threshold
value: 94.24455165863037
- type: euclidean_ap
value: 66.4567219323658
- type: euclidean_f1
value: 70.64676616915423
- type: euclidean_f1_threshold
value: 104.34587001800537
- type: euclidean_precision
value: 57.028112449799195
- type: euclidean_recall
value: 92.81045751633987
- type: main_score
value: 66.4567219323658
- type: manhattan_accuracy
value: 66.77524429967427
- type: manhattan_accuracy_threshold
value: 2865.5345916748047
- type: manhattan_ap
value: 66.26659863769075
- type: manhattan_f1
value: 70.8542713567839
- type: manhattan_f1_threshold
value: 3212.3912811279297
- type: manhattan_precision
value: 57.55102040816327
- type: manhattan_recall
value: 92.15686274509804
- type: max_accuracy
value: 66.77524429967427
- type: max_ap
value: 66.4567219323658
- type: max_f1
value: 70.8542713567839
- type: max_precision
value: 57.55102040816327
- type: max_recall
value: 92.81045751633987
- type: similarity_accuracy
value: 66.77524429967427
- type: similarity_accuracy_threshold
value: 55.58975338935852
- type: similarity_ap
value: 66.4567219323658
- type: similarity_f1
value: 70.64676616915423
- type: similarity_f1_threshold
value: 45.55969536304474
- type: similarity_precision
value: 57.028112449799195
- type: similarity_recall
value: 92.81045751633987
task:
type: PairClassification
license: mit
language:
- ru
- en
tags:
- mteb
- transformers
- sentence-transformers
base_model: ai-forever/FRED-T5-1.7B
pipeline_tag: feature-extraction
datasets:
- ai-forever/solyanka
---
# Model Card for FRIDA
<figure>
<img src="img.jpg">
</figure>
FRIDA is a fully fine-tuned general-purpose text embedding model inspired by the denoising architecture of T5. The model is based on the encoder part of the [FRED-T5](https://arxiv.org/abs/2309.10931) model and continues our research on text embedding models ([ruMTEB](https://arxiv.org/abs/2408.12503), [ru-en-RoSBERTa](https://huggingface.co/ai-forever/ru-en-RoSBERTa)). It has been pre-trained on a Russian-English dataset and fine-tuned for improved performance on the target tasks.
For more model details please refer to our [article](https://habr.com/ru/companies/sberdevices/articles/909924/) (RU).
## Usage
The model can be used as-is with task prefixes. CLS pooling is recommended. The choice of prefix and pooling depends on the task.
We use the following basic rules to choose a prefix:
- `"search_query: "` and `"search_document: "` prefixes are for answer or relevant paragraph retrieval
- `"paraphrase: "` prefix is for symmetric paraphrasing related tasks (STS, paraphrase mining, deduplication)
- `"categorize: "` prefix is for asymmetric matching of document title and body (e.g. news, scientific papers, social posts)
- `"categorize_sentiment: "` prefix is for any tasks that rely on sentiment features (e.g. hate, toxic, emotion)
- `"categorize_topic: "` prefix is intended for tasks where you need to group texts by topic
- `"categorize_entailment: "` prefix is for textual entailment task (NLI)
To better tailor the model to your needs, you can fine-tune it with relevant high-quality Russian and English datasets.
Below are examples of texts encoding using the Transformers and SentenceTransformers libraries.
### Transformers
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, T5EncoderModel
def pool(hidden_state, mask, pooling_method="cls"):
if pooling_method == "mean":
s = torch.sum(hidden_state * mask.unsqueeze(-1).float(), dim=1)
d = mask.sum(axis=1, keepdim=True).float()
return s / d
elif pooling_method == "cls":
return hidden_state[:, 0]
inputs = [
#
"paraphrase: В Ярославской области разрешили работу бань, но без посетителей",
"categorize_entailment: Женщину доставили в больницу, за ее жизнь сейчас борются врачи.",
"search_query: Сколько программистов нужно, чтобы вкрутить лампочку?",
#
"paraphrase: Ярославским баням разрешили работать без посетителей",
"categorize_entailment: Женщину спасают врачи.",
"search_document: Чтобы вкрутить лампочку, требуется три программиста: один напишет программу извлечения лампочки, другой — вкручивания лампочки, а третий проведет тестирование."
]
tokenizer = AutoTokenizer.from_pretrained("ai-forever/FRIDA")
model = T5EncoderModel.from_pretrained("ai-forever/FRIDA")
tokenized_inputs = tokenizer(inputs, max_length=512, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
outputs = model(**tokenized_inputs)
embeddings = pool(
outputs.last_hidden_state,
tokenized_inputs["attention_mask"],
pooling_method="cls" # or try "mean"
)
embeddings = F.normalize(embeddings, p=2, dim=1)
sim_scores = embeddings[:3] @ embeddings[3:].T
print(sim_scores.diag().tolist())
# [0.9360030293464661, 0.8591322302818298, 0.728583037853241]
```
### SentenceTransformers
```python
from sentence_transformers import SentenceTransformer
inputs = [
#
"paraphrase: В Ярославской области разрешили работу бань, но без посетителей",
"categorize_entailment: Женщину доставили в больницу, за ее жизнь сейчас борются врачи.",
"search_query: Сколько программистов нужно, чтобы вкрутить лампочку?",
#
"paraphrase: Ярославским баням разрешили работать без посетителей",
"categorize_entailment: Женщину спасают врачи.",
"search_document: Чтобы вкрутить лампочку, требуется три программиста: один напишет программу извлечения лампочки, другой — вкручивания лампочки, а третий проведет тестирование."
]
# loads model with CLS pooling
model = SentenceTransformer("ai-forever/FRIDA")
# embeddings are normalized by default
embeddings = model.encode(inputs, convert_to_tensor=True)
sim_scores = embeddings[:3] @ embeddings[3:].T
print(sim_scores.diag().tolist())
# [0.9360026717185974, 0.8591331243515015, 0.7285830974578857]
```
or using prompts (sentence-transformers>=2.4.0):
```python
from sentence_transformers import SentenceTransformer
# loads model with CLS pooling
model = SentenceTransformer("ai-forever/FRIDA")
paraphrase = model.encode(["В Ярославской области разрешили работу бань, но без посетителей", "Ярославским баням разрешили работать без посетителей"], prompt_name="paraphrase")
print(paraphrase[0] @ paraphrase[1].T) # 0.9360032
categorize_entailment = model.encode(["Женщину доставили в больницу, за ее жизнь сейчас борются врачи.", "Женщину спасают врачи."], prompt_name="categorize_entailment")
print(categorize_entailment[0] @ categorize_entailment[1].T) # 0.8591322
query_embedding = model.encode("Сколько программистов нужно, чтобы вкрутить лампочку?", prompt_name="search_query")
document_embedding = model.encode("Чтобы вкрутить лампочку, требуется три программиста: один напишет программу извлечения лампочки, другой — вкручивания лампочки, а третий проведет тестирование.", prompt_name="search_document")
print(query_embedding @ document_embedding.T) # 0.7285831
```
## Authors
+ [SaluteDevices](https://sberdevices.ru/) AI for B2C RnD Team.
+ Artem Snegirev: [HF profile](https://huggingface.co/artemsnegirev), [Github](https://github.com/artemsnegirev);
+ Anna Maksimova: [HF profile](https://huggingface.co/anpalmak);
+ Aleksandr Abramov: [HF profile](https://huggingface.co/Andrilko), [Github](https://github.com/Ab1992ao), [Kaggle Competitions Master](https://www.kaggle.com/andrilko)
## Citation
```
@misc{TODO
}
```
## Limitations
The model is designed to process texts in Russian; its quality on English texts is unknown. The maximum input length is limited to 512 tokens. |
Hyper-AI-Computer/FlaxLlama-Init-Model-V4 | Hyper-AI-Computer | 2025-05-26T09:04:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-26T08:53:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
```json
{
"bytes accessed0{}":38364528640.0,
"transcendentals":2114048000.0,
"utilization1{}":583.0,
"bytes accessed1{}":32694747136.0,
"utilization2{}":61.0,
"bytes accessedout{}":38372065280.0,
"utilization0{}":554.0,
"bytes accessed2{}":2025914368.0,
"bytes accessed":95467569152.0,
"flops":3053315162112.0
}
```
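These figures match the shape of XLA cost-analysis output; below is a hedged sketch of how comparable numbers can be produced with JAX (the forward function is a hypothetical stand-in, not this model):

```python
import jax
import jax.numpy as jnp

def forward(x):
    # Hypothetical stand-in computation; replace with the model's forward pass.
    return jnp.tanh(x @ x.T).sum()

x = jnp.ones((1024, 1024))
compiled = jax.jit(forward).lower(x).compile()
# Returns cost statistics (a dict, or a list of dicts on older JAX versions)
# with keys such as "flops" and "bytes accessed", like the JSON above.
print(compiled.cost_analysis())
```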
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e5 | ahmedelgebaly | 2025-05-26T09:03:35Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-23T13:38:23Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: llama-3.1-8b-squadv2_SciQ_e5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3.1-8B # same model you originally used
# Load your previously fine-tuned model as a PEFT adapter
peft_model: ahmedelgebaly/llama-3.1-8b-squadv2_e5
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: train
test_datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: validation
dataset_prepared_path:
output_dir: ./outputs/qlora-out
adapter: qlora
lora_model_dir:
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: llama-3.1-8b-squadv2_SciQ_e5
wandb_entity:
wandb_watch:
wandb_name: llama-3.1-8b-squadv2_SciQ_e5
wandb_log_model:
hub_model_id: ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e5
gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 5
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
pad_token: "<|end_of_text|>"
```
</details><br>
# llama-3.1-8b-squadv2_SciQ_e5
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 5
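The reported total batch size is simply the micro-batch multiplied by the accumulation steps; a quick sanity check (assuming a single GPU, since no device count is listed):

```python
# Effective batch size from the hyperparameters above (single-GPU assumption).
micro_batch_size = 4
gradient_accumulation_steps = 4
num_devices = 1  # assumption: not reported in the card
effective_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
assert effective_batch_size == 16  # matches total_train_batch_size
```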
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7866 | 0.0305 | 1 | 1.8420 |
| 1.1272 | 0.2443 | 8 | 1.0962 |
| 0.8409 | 0.4885 | 16 | 0.9651 |
| 0.8668 | 0.7328 | 24 | 0.9332 |
| 0.8579 | 0.9771 | 32 | 0.9190 |
| 0.8342 | 1.2137 | 40 | 0.9073 |
| 0.799 | 1.4580 | 48 | 0.9008 |
| 0.8282 | 1.7023 | 56 | 0.8955 |
| 0.8018 | 1.9466 | 64 | 0.8928 |
| 0.8041 | 2.1832 | 72 | 0.8922 |
| 0.8032 | 2.4275 | 80 | 0.8903 |
| 0.7785 | 2.6718 | 88 | 0.8875 |
| 0.7522 | 2.9160 | 96 | 0.8861 |
| 0.7369 | 3.1527 | 104 | 0.8948 |
| 0.7527 | 3.3969 | 112 | 0.8921 |
| 0.7414 | 3.6412 | 120 | 0.8928 |
| 0.7227 | 3.8855 | 128 | 0.8935 |
| 0.7021 | 4.1221 | 136 | 0.8948 |
| 0.7255 | 4.3664 | 144 | 0.8972 |
| 0.7037 | 4.6107 | 152 | 0.8977 |
| 0.7006 | 4.8550 | 160 | 0.8976 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
bigband/EnchantingDumuzi | bigband | 2025-05-26T09:02:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-26T08:53:00Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
DevQuasar/beetlware.Bee1reason-arabic-Qwen-14B-GGUF | DevQuasar | 2025-05-26T09:01:49Z | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:beetlware/Bee1reason-arabic-Qwen-14B",
"base_model:quantized:beetlware/Bee1reason-arabic-Qwen-14B",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-26T06:59:27Z | ---
base_model:
- beetlware/Bee1reason-arabic-Qwen-14B
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [beetlware/Bee1reason-arabic-Qwen-14B](https://huggingface.co/beetlware/Bee1reason-arabic-Qwen-14B)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
bigband/TranscendentApollo | bigband | 2025-05-26T09:01:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-26T08:52:00Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
aejion/AccVideo-WanX-T2V-14B | aejion | 2025-05-26T09:01:03Z | 0 | 3 | diffusers | [
"diffusers",
"safetensors",
"t2v",
"arxiv:2503.19462",
"region:us"
] | null | 2025-05-26T03:12:52Z | # AccVideo: Accelerating Video Diffusion Model with Synthetic Dataset
This repository is the official PyTorch implementation of [AccVideo](https://arxiv.org/abs/2503.19462). AccVideo is a novel, efficient distillation method that accelerates video diffusion models with a synthetic dataset. Our method is 8.5x faster than HunyuanVideo.
[](https://arxiv.org/abs/2503.19462)
[](https://aejion.github.io/accvideo/)
[](https://huggingface.co/aejion/AccVideo)
## 🔥🔥🔥 News
* May 26, 2025: We release the inference code and [model weights](https://huggingface.co/aejion/AccVideo-WanX-T2V-14B) of AccVideo based on WanXT2V-14B.
* Mar 31, 2025: [ComfyUI-Kijai (FP8 Inference)](https://huggingface.co/Kijai/HunyuanVideo_comfy/blob/main/accvideo-t2v-5-steps_fp8_e4m3fn.safetensors): ComfyUI-Integration by [Kijai](https://huggingface.co/Kijai)
* Mar 26, 2025: We release the inference code and [model weights](https://huggingface.co/aejion/AccVideo) of AccVideo based on HunyuanT2V.
## 🎥 Demo (Based on HunyuanT2V)
https://github.com/user-attachments/assets/59f3c5db-d585-4773-8d92-366c1eb040f0
## 🎥 Demo (Based on WanXT2V-14B)
## 📑 Open-source Plan
- [x] Inference
- [x] Checkpoints
- [ ] Multi-GPU Inference
- [ ] Synthetic Video Dataset, SynVid
- [ ] Training
## 🔧 Installation
The code is tested on Python 3.10.0, CUDA 11.8 and A100.
```
conda create -n accvideo python==3.10.0
conda activate accvideo
pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
pip install flash-attn==2.7.3 --no-build-isolation
pip install "huggingface_hub[cli]"
```
## 🤗 Checkpoints
To download the checkpoints (based on HunyuanT2V), use the following command:
```bash
# Download the model weight
huggingface-cli download aejion/AccVideo --local-dir ./ckpts
```
To download the checkpoints (based on WanX-T2V-14B), use the following command:
```bash
# Download the model weight
huggingface-cli download aejion/AccVideo-WanX-T2V-14B --local-dir ./wanx_t2v_ckpts
```
## 🚀 Inference
We recommend using a GPU with 80GB of memory. We use AccVideo to distill Hunyuan and WanX.
### Inference for HunyuanT2V
To run the inference, use the following command:
```bash
export MODEL_BASE=./ckpts
python sample_t2v.py \
--height 544 \
--width 960 \
--num_frames 93 \
--num_inference_steps 5 \
--guidance_scale 1 \
--embedded_cfg_scale 6 \
--flow_shift 7 \
--flow-reverse \
--prompt_file ./assets/prompt.txt \
--seed 1024 \
--output_path ./results/accvideo-544p \
--model_path ./ckpts \
--dit-weight ./ckpts/accvideo-t2v-5-steps/diffusion_pytorch_model.pt
```
The following table shows the comparisons on inference time using a single A100 GPU:
| Model | Setting(height/width/frame) | Inference Time(s) |
|:------------:|:---------------------------:|:-----------------:|
| HunyuanVideo | 720px1280px129f | 3234 |
| Ours | 720px1280px129f | 380(8.5x faster) |
| HunyuanVideo | 544px960px93f | 704 |
| Ours | 544px960px93f | 91(7.7x faster) |
### Inference for WanXT2V
To run the inference, use the following command:
```bash
python sample_wanx_t2v.py \
--task t2v-14B \
--size 832*480 \
--ckpt_dir ./wanx_t2v_ckpts \
--sample_solver 'unipc' \
--save_dir ./results/accvideo_wanx_14B \
--sample_steps 10
```
The following table shows the comparisons on inference time using a single A100 GPU:
| Model | Setting(height/width/frame) | Inference Time(s) |
|:-----:|:---------------------------:|:-----------------:|
| Wanx | 480px832px81f | 932 |
| Ours | 480px832px81f | 97(9.6x faster) |
## 🔗 BibTeX
If you find [AccVideo](https://arxiv.org/abs/2503.19462) useful for your research and applications, please cite using this BibTeX:
```BibTeX
@article{zhang2025accvideo,
title={AccVideo: Accelerating Video Diffusion Model with Synthetic Dataset},
author={Zhang, Haiyu and Chen, Xinyuan and Wang, Yaohui and Liu, Xihui and Wang, Yunhong and Qiao, Yu},
journal={arXiv preprint arXiv:2503.19462},
year={2025}
}
```
## Acknowledgements
The code is built upon [FastVideo](https://github.com/hao-ai-lab/FastVideo) and [HunyuanVideo](https://github.com/Tencent/HunyuanVideo); we thank all the contributors for open-sourcing their work.
|
leobianco/npov_RM_model_google_seed_12345_SYN_LLM_false_SYN_STRUCT_false_epochs_3_lr_1e-4_lora_1 | leobianco | 2025-05-26T09:00:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-26T08:50:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Student-rl_Qwen2.5-7B-Instruct_v0.0.1-GGUF | mradermacher | 2025-05-26T09:00:27Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"grpo",
"en",
"base_model:wckwan/Student-rl_Qwen2.5-7B-Instruct_v0.0.1",
"base_model:quantized:wckwan/Student-rl_Qwen2.5-7B-Instruct_v0.0.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-26T08:29:34Z | ---
base_model: wckwan/Student-rl_Qwen2.5-7B-Instruct_v0.0.1
language:
- en
library_name: transformers
model_name: Student-rl_Qwen2.5-7B-Instruct_v0.0.1
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- grpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/wckwan/Student-rl_Qwen2.5-7B-Instruct_v0.0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
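For quants that arrive split into several pieces, the parts only need to be joined byte-for-byte before loading. A minimal Python sketch (the filenames are illustrative, and `sorted()` assumes fewer than ten parts):

```python
import glob
import shutil

# Join split GGUF parts (e.g. *.gguf.part1of2, *.gguf.part2of2) into one file.
parts = sorted(glob.glob("Student-rl_Qwen2.5-7B-Instruct_v0.0.1.Q8_0.gguf.part*"))
with open("Student-rl_Qwen2.5-7B-Instruct_v0.0.1.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # byte-for-byte concatenation
```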
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Student-rl_Qwen2.5-7B-Instruct_v0.0.1-GGUF/resolve/main/Student-rl_Qwen2.5-7B-Instruct_v0.0.1.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Student-rl_Qwen2.5-7B-Instruct_v0.0.1-GGUF/resolve/main/Student-rl_Qwen2.5-7B-Instruct_v0.0.1.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Student-rl_Qwen2.5-7B-Instruct_v0.0.1-GGUF/resolve/main/Student-rl_Qwen2.5-7B-Instruct_v0.0.1.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Student-rl_Qwen2.5-7B-Instruct_v0.0.1-GGUF/resolve/main/Student-rl_Qwen2.5-7B-Instruct_v0.0.1.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Student-rl_Qwen2.5-7B-Instruct_v0.0.1-GGUF/resolve/main/Student-rl_Qwen2.5-7B-Instruct_v0.0.1.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Student-rl_Qwen2.5-7B-Instruct_v0.0.1-GGUF/resolve/main/Student-rl_Qwen2.5-7B-Instruct_v0.0.1.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Student-rl_Qwen2.5-7B-Instruct_v0.0.1-GGUF/resolve/main/Student-rl_Qwen2.5-7B-Instruct_v0.0.1.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Student-rl_Qwen2.5-7B-Instruct_v0.0.1-GGUF/resolve/main/Student-rl_Qwen2.5-7B-Instruct_v0.0.1.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Student-rl_Qwen2.5-7B-Instruct_v0.0.1-GGUF/resolve/main/Student-rl_Qwen2.5-7B-Instruct_v0.0.1.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Student-rl_Qwen2.5-7B-Instruct_v0.0.1-GGUF/resolve/main/Student-rl_Qwen2.5-7B-Instruct_v0.0.1.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Student-rl_Qwen2.5-7B-Instruct_v0.0.1-GGUF/resolve/main/Student-rl_Qwen2.5-7B-Instruct_v0.0.1.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Student-rl_Qwen2.5-7B-Instruct_v0.0.1-GGUF/resolve/main/Student-rl_Qwen2.5-7B-Instruct_v0.0.1.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
AbderrahmanSkiredj1/GemMaroc-27b-it-GGUF | AbderrahmanSkiredj1 | 2025-05-26T08:58:52Z | 124 | 0 | transformers | [
"transformers",
"gguf",
"Moroccan",
"Darija",
"GemMaroc",
"GGUF",
"ary",
"en",
"ar",
"dataset:GemMaroc/TULU-3-50k-darija-english",
"arxiv:2505.17082",
"base_model:AbderrahmanSkiredj1/GemMaroc-27b-it",
"base_model:quantized:AbderrahmanSkiredj1/GemMaroc-27b-it",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-22T17:47:29Z | ---
base_model: AbderrahmanSkiredj1/GemMaroc-27b-it
language:
- ary
- en
- ar
library_name: transformers
quantized_by: mradermacher
datasets:
- GemMaroc/TULU-3-50k-darija-english
tags:
- Moroccan
- Darija
- GemMaroc
- GGUF
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AbderrahmanSkiredj1/GemMaroc-27b-it
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
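Once a quant from the table below is downloaded, it can also be run directly with `llama-cpp-python` (a sketch; the filename, context size, and generation settings are assumptions):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="GemMaroc-27b-it.Q4_K_M.gguf",  # one of the quants listed below
    n_ctx=2048,       # matches the model's 2,048-token context length
    n_gpu_layers=-1,  # offload all layers when built with GPU support
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "شنو هي نظرية 'butterfly effect'؟ فسّرها بدارجة."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```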
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GemMaroc-27b-it-GGUF/resolve/main/GemMaroc-27b-it.Q2_K.gguf) | Q2_K | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/GemMaroc-27b-it-GGUF/resolve/main/GemMaroc-27b-it.Q3_K_S.gguf) | Q3_K_S | 12.3 | |
| [GGUF](https://huggingface.co/mradermacher/GemMaroc-27b-it-GGUF/resolve/main/GemMaroc-27b-it.Q3_K_M.gguf) | Q3_K_M | 13.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GemMaroc-27b-it-GGUF/resolve/main/GemMaroc-27b-it.Q3_K_L.gguf) | Q3_K_L | 14.6 | |
| [GGUF](https://huggingface.co/mradermacher/GemMaroc-27b-it-GGUF/resolve/main/GemMaroc-27b-it.IQ4_XS.gguf) | IQ4_XS | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/GemMaroc-27b-it-GGUF/resolve/main/GemMaroc-27b-it.Q4_K_S.gguf) | Q4_K_S | 15.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GemMaroc-27b-it-GGUF/resolve/main/GemMaroc-27b-it.Q4_K_M.gguf) | Q4_K_M | 16.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GemMaroc-27b-it-GGUF/resolve/main/GemMaroc-27b-it.Q5_K_S.gguf) | Q5_K_S | 18.9 | |
| [GGUF](https://huggingface.co/mradermacher/GemMaroc-27b-it-GGUF/resolve/main/GemMaroc-27b-it.Q5_K_M.gguf) | Q5_K_M | 19.4 | |
| [GGUF](https://huggingface.co/mradermacher/GemMaroc-27b-it-GGUF/resolve/main/GemMaroc-27b-it.Q6_K.gguf) | Q6_K | 22.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GemMaroc-27b-it-GGUF/resolve/main/GemMaroc-27b-it.Q8_0.gguf) | Q8_0 | 28.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
---
# GemMaroc‑27B
Unlocking **Moroccan Darija** proficiency in a state‑of‑the‑art large language model, trained with a *minimal‑data, green‑AI* recipe that preserves Gemma‑27B’s strong reasoning abilities while adding fluent Darija generation.
---
## Model at a glance
| | Details |
| ------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| **Model ID** | `AbderrahmanSkiredj1/GemMaroc-27b-it` |
| **Base model** | [`google/gemma-3-27b`](https://huggingface.co/google/gemma-3-27b) |
| **Architecture** | Decoder‑only Transformer (Gemma 3) |
| **Parameters** | 27 billion |
| **Context length** | 2 048 tokens |
| **Training regime** | Supervised fine‑tuning (LoRA → merged) on a 50 K high‑quality Darija/English instruction set (TULU‑50K slice) |
| **Compute budget** | 48 GPU·h (8 × H100‑80GB × 6 h) – ≈ 26 kWh / 10 kg CO₂e |
| **License** | Apache 2.0 |
---
## Why another Darija model?
* **Inclusive AI** > 36 million speakers of Moroccan Arabic remain underserved by open LLMs.
* **Quality‑over‑quantity** A carefully curated 50 K instruction set surfaces Darija competence without sacrificing cross‑lingual reasoning.
* **Green AI** GemMaroc achieves Atlas‑Chat‑level Darija scores using < 2 % of the energy.
---
## Benchmark summary
| Model | Darija MMLU | Darija HellaSwag | GSM8K @5 | HellaSwag (EN) |
| ---------------- | ----------- | ---------------- | ---------- | -------------- |
| Atlas‑Chat‑27B | **61.9 %** | 48.4 % | 82.0 % | 77.8 % |
| **GemMaroc‑27B** | 61.6 % | **60.5 %** | **84.2 %** | **79.3 %** |
<sub>Zero‑shot accuracy; full table in the paper.</sub>
---
## Quick start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = "AbderrahmanSkiredj1/GemMaroc-27b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype="auto",
device_map="auto"
)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
device_map="auto",
max_new_tokens=1024,
temperature=0.7,
repetition_penalty=1.2,
no_repeat_ngram_size=3,
)
messages = [
{"role": "user", "content": "شنو هي نظرية ‘butterfly effect’؟ فسّرها بدارجة ونقّط مثال بسيط."}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(pipe(prompt)[0]["generated_text"][len(prompt):])
```
### Chat template (Gemma 3 format)
The tokenizer provides a baked‑in Jinja template that starts with a **begin‑of‑sequence** token (`<bos>`), then alternates user/model turns, each wrapped by `<start_of_turn>` … `<end_of_turn>` markers. When you set `add_generation_prompt=True` it ends after the opening model tag so the model can continue:
```
<bos><start_of_turn>user
{user message}<end_of_turn>
<start_of_turn>model
```
The assistant will keep generating tokens until it decides to emit `<end_of_turn>`.
```python
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
```
No manual token juggling required—the call above handles BOS, turn delimiters, and newline placement automatically.
---
Pre‑quantised checkpoints will be published under the same repo tags (`gemmaroc‑27b‑awq‑int4`, `gemmaroc‑27b‑gguf‑q4_k_m`).
---
## Training recipe (one‑paragraph recap)
1. **Data** Translate a 44 K reasoning slice of TULU 50K into Darija, keeping 20 % English for cross‑lingual robustness.
2. **LoRA SFT** Rank 16, α = 32, 3 epochs, bf16, context 2 048.
3. **Merge & push** Merge LoRA into base weights (`peft.merge_and_unload`), convert to safetensors, upload (see the sketch below).
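A minimal sketch of the merge step, using the standard PEFT API (the base-model ID follows the table above; the adapter path is illustrative):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("google/gemma-3-27b", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # hypothetical local adapter
merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("gemmaroc-27b-merged", safe_serialization=True)  # writes safetensors
```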
---
## Limitations & ethical considerations
* Sentiment and abstractive summarisation still trail state‑of‑the‑art.
* Tokeniser is unchanged; rare Darija spellings may fragment.
* Model may inherit societal biases present in pre‑training data.
* No RLHF / RLAIF safety alignment yet – apply a moderation layer in production.
---
## Citation
If you use GemMaroc in your work, please cite:
```bibtex
@misc{skiredj2025gemmarocunlockingdarijaproficiency,
title={GemMaroc: Unlocking Darija Proficiency in LLMs with Minimal Data},
author={Abderrahman Skiredj and Ferdaous Azhari and Houdaifa Atou and Nouamane Tazi and Ismail Berrada},
year={2025},
eprint={2505.17082},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.17082},
}
```
<!-- end --> |
GemMaroc/GemMaroc-4b-tulu | GemMaroc | 2025-05-26T08:57:18Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"MoroccanArabic",
"Darija",
"GemMaroc",
"conversational",
"ar",
"ary",
"en",
"dataset:GemMaroc/TULU-3-50k-darija-english",
"arxiv:2505.17082",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-20T00:18:46Z | ---
library_name: transformers
tags:
- MoroccanArabic
- Darija
- GemMaroc
datasets:
- GemMaroc/TULU-3-50k-darija-english
language:
- ar
- ary
- en
base_model:
- google/gemma-3-27b-it
---
# GemMaroc‑27B
Unlocking **Moroccan Darija** proficiency in a state‑of‑the‑art large language model, trained with a *minimal‑data, green‑AI* recipe that preserves Gemma‑27B’s strong reasoning abilities while adding fluent Darija generation.
---
## Model at a glance
| | Details |
| ------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| **Model ID** | `AbderrahmanSkiredj1/GemMaroc-27b-it` |
| **Base model** | [`google/gemma-3-27b`](https://huggingface.co/google/gemma-3-27b) |
| **Architecture** | Decoder‑only Transformer (Gemma 3) |
| **Parameters** | 27 billion |
| **Context length** | 2 048 tokens |
| **Training regime** | Supervised fine‑tuning (LoRA → merged) on a 50 K high‑quality Darija/English instruction set (TULU‑50K slice) |
| **Compute budget** | 48 GPU·h (8 × H100‑80GB × 6 h) – ≈ 26 kWh / 10 kg CO₂e |
| **License** | Apache 2.0 |
---
## Why another Darija model?
* **Inclusive AI** > 36 million speakers of Moroccan Arabic remain underserved by open LLMs.
* **Quality‑over‑quantity** A carefully curated 50 K instruction set surfaces Darija competence without sacrificing cross‑lingual reasoning.
* **Green AI** GemMaroc achieves Atlas‑Chat‑level Darija scores using < 2 % of the energy.
---
## Benchmark summary
| Model | Darija MMLU | Darija HellaSwag | GSM8K @5 | HellaSwag (EN) |
| ---------------- | ----------- | ---------------- | ---------- | -------------- |
| Atlas‑Chat‑27B | **61.9 %** | 48.4 % | 82.0 % | 77.8 % |
| **GemMaroc‑27B** | 61.6 % | **60.5 %** | **84.2 %** | **79.3 %** |
<sub>Zero‑shot accuracy; full table in the paper.</sub>
---
## Quick start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = "AbderrahmanSkiredj1/GemMaroc-27b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype="auto",
device_map="auto"
)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
device_map="auto",
max_new_tokens=1024,
temperature=0.7,
repetition_penalty=1.2,
no_repeat_ngram_size=3,
)
messages = [
{"role": "user", "content": "شنو هي نظرية ‘butterfly effect’؟ فسّرها بدارجة ونقّط مثال بسيط."}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(pipe(prompt)[0]["generated_text"][len(prompt):])
```
### Chat template (Gemma 3 format)
The tokenizer provides a baked‑in Jinja template that starts with a **begin‑of‑sequence** token (`<bos>`), then alternates user/model turns, each wrapped by `<start_of_turn>` … `<end_of_turn>` markers. When you set `add_generation_prompt=True` it ends after the opening model tag so the model can continue:
```
<bos><start_of_turn>user
{user message}<end_of_turn>
<start_of_turn>model
```
The assistant will keep generating tokens until it decides to emit `<end_of_turn>`.
```python
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
```
No manual token juggling required—the call above handles BOS, turn delimiters, and newline placement automatically.
---
Pre‑quantised checkpoints will be published under the same repo tags (`gemmaroc‑27b‑awq‑int4`, `gemmaroc‑27b‑gguf‑q4_k_m`).
---
## Training recipe (one‑paragraph recap)
1. **Data** Translate a 44 K reasoning slice of TULU 50K into Darija, keeping 20 % English for cross‑lingual robustness.
2. **LoRA SFT** Rank 16, α = 32, 3 epochs, bf16, context 2 048.
3. **Merge & push** Merge LoRA into base weights (`peft.merge_and_unload`), convert to safetensors, upload.
---
## Limitations & ethical considerations
* Sentiment and abstractive summarisation still trail state‑of‑the‑art.
* Tokeniser is unchanged; rare Darija spellings may fragment.
* Model may inherit societal biases present in pre‑training data.
* No RLHF / RLAIF safety alignment yet – apply a moderation layer in production.
---
## Citation
If you use GemMaroc in your work, please cite:
```bibtex
@misc{skiredj2025gemmarocunlockingdarijaproficiency,
title={GemMaroc: Unlocking Darija Proficiency in LLMs with Minimal Data},
author={Abderrahman Skiredj and Ferdaous Azhari and Houdaifa Atou and Nouamane Tazi and Ismail Berrada},
year={2025},
eprint={2505.17082},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.17082},
}
```
|
Aluba/zombie2505_26 | Aluba | 2025-05-26T08:56:49Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-26T08:29:54Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_E2_V2 | ahmedelgebaly | 2025-05-26T08:54:06Z | 12 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-09T15:16:11Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: llama-3.1-8b-squadv2_SciQ_E2_V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3.1-8B
lora_model_dir: ahmedelgebaly/llama-3.1-8b-squadv2_E1_V2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: train
- path: ahmedelgebaly/SQuad_2_Alpaca
type: alpaca
split: train
percentage: 0.1 # small replay buffer to avoid forgetting
test_datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: validation
dataset_prepared_path:
output_dir: ./outputs/qlora-out
adapter: qlora
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 64 #Before it was 16
lora_dropout: 0.05
lora_target_modules: #Before it was empty
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- up_proj
- down_proj
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: llama-3.1-8b-squadv2_SciQ_e2_v2
wandb_entity:
wandb_watch:
wandb_name: llama-3.1-8b-squadv2-v0_SciQ_e2_v2
wandb_log_model:
hub_model_id: ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_E2_V2
gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 2
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: true #Before it was false
bf16: auto
tf32: false
gradient_checkpointing: true
flash_attention: true
warmup_steps: 50 #Before it was 10
evals_per_epoch: 4
saves_per_epoch: 1
weight_decay: 0.0
special_tokens:
pad_token: "<|end_of_text|>"
```
</details><br>
# llama-3.1-8b-squadv2_SciQ_E2_V2
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8990
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0029 | 1 | 2.2993 |
| 0.8102 | 0.2504 | 85 | 0.9110 |
| 0.8141 | 0.5007 | 170 | 0.8933 |
| 0.8189 | 0.7511 | 255 | 0.8846 |
| 0.8188 | 1.0015 | 340 | 0.8763 |
| 0.6354 | 1.2496 | 425 | 0.9022 |
| 0.6568 | 1.5 | 510 | 0.9029 |
| 0.639 | 1.7504 | 595 | 0.8990 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Manucn10/kaggle-v2 | Manucn10 | 2025-05-26T08:53:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-26T08:52:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pwde/tuiche-ceshi | pwde | 2025-05-26T08:52:54Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-05-26T08:01:47Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
Pretrain-FBK-NLP/mt5-large_AllDataSourcesClinical_0.0002_constant_1024_paper | Pretrain-FBK-NLP | 2025-05-26T08:51:22Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-05-13T23:13:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e1_v2 | ahmedelgebaly | 2025-05-26T08:51:09Z | 19 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-26T17:42:50Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: llama-3.1-8b-squadv2_SciQ_e1_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3.1-8B # same model you originally used
peft_model: ahmedelgebaly/llama-3.1-8b-squadv2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: train
test_datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: validation
dataset_prepared_path:
output_dir: ./outputs/qlora-out
adapter: qlora
lora_model_dir:
sequence_len: 2048 # Halved from 4096 to reduce memory usage
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
lora_r: 64 # Increased from 32
lora_alpha: 32 # Increased from 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: llama-3.1-8b-squadv2_SciQ_e1_v2
wandb_entity:
wandb_watch:
wandb_name: llama-3.1-8b-squadv2-v0_SciQ_e1_v2
wandb_log_model:
hub_model_id: ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e1_v2
gradient_accumulation_steps: 32 # Keeps effective batch size=64 (2x32)
micro_batch_size: 2 # Decreased from 4
num_epochs: 1
optimizer: paged_adamw_32bit
lr_scheduler: cosine_with_restarts # Updated
learning_rate: 0.0001 # Reduced from 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 100 # Increased from 10
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
pad_token: "<|end_of_text|>"
```
</details><br>
# llama-3.1-8b-squadv2_SciQ_e1_v2
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5100
## Model description
More information needed
## Intended uses & limitations
More information needed
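Until the author adds details, a QLoRA adapter like this one can usually be attached to its base model with PEFT. The sketch below is untested and assumes 4-bit loading and an Alpaca-style prompt, matching the training config above:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3.1-8B"
adapter_id = "ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e1_v2"

# Load the base model in 4-bit to mirror the QLoRA training setup
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the fine-tuned LoRA adapter from this repo
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "### Instruction:\nWhat gas do plants absorb during photosynthesis?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```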
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8006 | 0.0598 | 1 | 1.8330 |
| 1.7825 | 0.2393 | 4 | 1.8315 |
| 1.7629 | 0.4785 | 8 | 1.8140 |
| 1.6663 | 0.7178 | 12 | 1.7312 |
| 1.5168 | 0.9570 | 16 | 1.5100 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Mihaj/whisper-medium-karelian-cs-w-rus | Mihaj | 2025-05-26T08:49:41Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-20T09:38:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
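In the absence of an official snippet, the repository name suggests a Whisper-style speech-recognition checkpoint; if that holds, a minimal sketch would be (the audio file name is a placeholder):
```python
from transformers import pipeline

# Assumes a Whisper-style ASR checkpoint, as the repository name suggests
asr = pipeline("automatic-speech-recognition", model="Mihaj/whisper-medium-karelian-cs-w-rus")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder audio file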
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
boltuix/NeuroFeel | boltuix | 2025-05-26T08:48:15Z | 95 | 4 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"emotion",
"classification",
"neurobert",
"emojis",
"emotions",
"v1.0",
"sentiment-analysis",
"nlp",
"lightweight",
"chatbot",
"social-media",
"mental-health",
"short-text",
"emotion-detection",
"real-time",
"expressive",
"ai",
"machine-learning",
"english",
"inference",
"edge-ai",
"smart-replies",
"tone-analysis",
"contextual-ai",
"wearable-ai",
"en",
"dataset:custom",
"dataset:chatgpt",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-24T11:33:55Z | ---
license: apache-2.0
language:
- en
metrics:
- precision
- recall
- f1
- accuracy
new_version: v1.0
datasets:
- custom
- chatgpt
pipeline_tag: text-classification
library_name: transformers
tags:
- emotion
- classification
- text-classification
- neurobert
- emojis
- emotions
- v1.0
- sentiment-analysis
- nlp
- lightweight
- chatbot
- social-media
- mental-health
- short-text
- emotion-detection
- transformers
- real-time
- expressive
- ai
- machine-learning
- english
- inference
- edge-ai
- smart-replies
- tone-analysis
- contextual-ai
- wearable-ai
base_model:
- neurobert
---
# 😊 NeuroFeel — Lightweight NeuroBERT for Real-Time Emotion Detection 🌟
[](https://www.apache.org/licenses/LICENSE-2.0)
[](#)
[](#)
[](#)
## Table of Contents
- 📖 [Overview](#overview)
- ✨ [Key Features](#key-features)
- 💫 [Supported Emotions](#supported-emotions)
- 🧠 [Model Architecture](#model-architecture)
- ⚙️ [Installation](#installation)
- 📥 [Download Instructions](#download-instructions)
- 🚀 [Quickstart: Emotion Detection](#quickstart-emotion-detection)
- 💡 [Use Cases](#use-cases)
- 🖥️ [Hardware Requirements](#hardware-requirements)
- 📚 [Training Details](#training-details)
- 🔧 [Fine-Tuning Guide](#fine-tuning-guide)
- ⚖️ [Comparison to Other Models](#comparison-to-other-models)
- 🏷️ [Tags](#tags)
- 📄 [License](#license)
- 🙏 [Credits](#credits)
- 💬 [Support & Community](#support--community)
- ✍️ [Contact](#contact)
## 🚀 Model Training Tutorial Video
Watch this **step-by-step guide** to train your machine learning model! 🎥
[](https://www.youtube.com/watch?v=FccGKE1kV4Q)
*Click the image above to watch the tutorial!*
## Overview
`NeuroFeel` is a **lightweight** NLP model built on **NeuroBERT**, fine-tuned for **short-text emotion detection** on **edge and IoT devices**. With a quantized size of **~25MB** and **~7M parameters**, it classifies text into **13 nuanced emotional categories** (e.g., Happiness, Sadness, Anger, Love) with high precision. Optimized for **low-latency** and **offline operation**, NeuroFeel is perfect for privacy-focused applications like chatbots, social media sentiment analysis, mental health monitoring, and contextual AI in resource-constrained environments such as wearables, smart home devices, and mobile apps.
- **Model Name**: NeuroFeel
- **Size**: ~25MB (quantized)
- **Parameters**: ~7M
- **Architecture**: Lightweight NeuroBERT (4 layers, hidden size 256, 8 attention heads)
- **Description**: Compact 4-layer, 256-hidden model for emotion detection
- **License**: Apache-2.0 — free for commercial and personal use
## Key Features
- ⚡ **Ultra-Compact Design**: ~25MB footprint for devices with limited storage.
- 🧠 **Rich Emotion Detection**: Classifies 13 emotions with expressive emoji mappings.
- 📶 **Offline Capability**: Fully functional without internet connectivity.
- ⚙️ **Real-Time Inference**: Optimized for CPUs, mobile NPUs, and microcontrollers.
- 🌍 **Versatile Applications**: Supports emotion detection, sentiment analysis, and tone analysis for short texts.
- 🔒 **Privacy-First**: On-device processing ensures user data stays local.
## Supported Emotions
NeuroFeel classifies text into one of 13 emotional categories, each paired with an emoji for enhanced interpretability:
| Emotion | Emoji |
|------------|-------|
| Sadness | 😢 |
| Anger | 😠 |
| Love | ❤️ |
| Surprise | 😲 |
| Fear | 😱 |
| Happiness | 😄 |
| Neutral | 😐 |
| Disgust | 🤢 |
| Shame | 🙈 |
| Guilt | 😔 |
| Confusion | 😕 |
| Desire | 🔥 |
| Sarcasm | 😏 |
## Model Architecture
NeuroFeel is derived from **NeuroBERT**, a lightweight transformer model optimized for edge computing. Key architectural details:
- **Layers**: 4 transformer layers for reduced computational complexity.
- **Hidden Size**: 256, balancing expressiveness and efficiency.
- **Attention Heads**: 8, enabling robust contextual understanding.
- **Parameters**: ~7M, significantly fewer than standard BERT models.
- **Quantization**: INT8 quantization for minimal memory usage and fast inference.
- **Vocabulary Size**: 30,522 tokens, compatible with NeuroBERT’s tokenizer.
- **Max Sequence Length**: 64 tokens, ideal for short-text inputs like social media posts or chatbot messages.
This architecture ensures NeuroFeel delivers high accuracy for emotion detection while maintaining compatibility with resource-constrained devices like Raspberry Pi, ESP32, or mobile NPUs.
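As an illustration of the INT8 idea mentioned above, PyTorch's post-training dynamic quantization gives a rough approximation; note this is a sketch, not the author's actual quantization pipeline:
```python
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("boltuix/NeuroFeel")

# Quantize the Linear layers to INT8 weights (illustrative only; the released
# checkpoint was already quantized by the author with their own pipeline)
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
torch.save(quantized.state_dict(), "neurofeel-int8.pt")
```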
## Installation
Install the required dependencies:
```bash
pip install transformers torch
```
Ensure your environment supports Python 3.6+ and has ~25MB of storage for model weights.
## Download Instructions
1. **Via Hugging Face**:
- Access the model at [boltuix/NeuroFeel](https://huggingface.co/boltuix/NeuroFeel).
- Download the model files (~25MB) or clone the repository:
```bash
git clone https://huggingface.co/boltuix/NeuroFeel
```
2. **Via Transformers Library**:
- Load the model directly in Python:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("boltuix/NeuroFeel")
tokenizer = AutoTokenizer.from_pretrained("boltuix/NeuroFeel")
```
3. **Manual Download**:
- Download quantized model weights (Safetensors format) from the Hugging Face model hub.
- Extract and integrate into your edge/IoT application.
4. **Dataset Download**:

# 🌟 Emotions Dataset — Infuse Your AI with Human Feelings! 😊😢😡
**[Start Exploring Dataset](https://huggingface.co/datasets/boltuix/emotions-dataset)** 🚀
## Quickstart: Emotion Detection
### Basic Inference Example
Classify emotions in short text inputs using the Hugging Face pipeline:
```python
from transformers import pipeline
# Load the fine-tuned NeuroFeel model
sentiment_analysis = pipeline("text-classification", model="boltuix/NeuroFeel")
# Analyze emotion
result = sentiment_analysis("i love you")
print(result)
```
**Output**:
```python
[{'label': 'Love', 'score': 0.8563215732574463}]
```
This indicates the emotion is **Love ❤️** with **85.63%** confidence.
### Extended Example with Emoji Mapping
Enhance the output with human-readable emotions and emojis:
```python
from transformers import pipeline
# Load the fine-tuned NeuroFeel model
sentiment_analysis = pipeline("text-classification", model="boltuix/NeuroFeel")
# Define label-to-emoji mapping
label_to_emoji = {
"Sadness": "😢",
"Anger": "😠",
"Love": "❤️",
"Surprise": "😲",
"Fear": "😱",
"Happiness": "😄",
"Neutral": "😐",
"Disgust": "🤢",
"Shame": "🙈",
"Guilt": "😔",
"Confusion": "😕",
"Desire": "🔥",
"Sarcasm": "😏"
}
# Input text
text = "i love you"
# Analyze emotion
result = sentiment_analysis(text)[0]
label = result["label"].capitalize()
emoji = label_to_emoji.get(label, "❓")
# Output
print(f"Text: {text}")
print(f"Predicted Emotion: {label} {emoji}")
print(f"Confidence: {result['score']:.2%}")
```
**Output**:
```plaintext
Text: i love you
Predicted Emotion: Love ❤️
Confidence: 85.63%
```
*Note*: Fine-tune the model for domain-specific tasks to boost accuracy.
NeuroFeel excels in classifying a wide range of emotions in short texts, particularly in IoT, social media, and mental health contexts. Fine-tuning enhances performance on subtle emotions like Sarcasm or Shame.
### Evaluation Metrics
| Metric | Value (Approx.) |
|------------|-----------------------|
| ✅ Accuracy | ~92–96% on 13-class emotion tasks |
| 🎯 F1 Score | Balanced for multi-class classification |
| ⚡ Latency | <40ms on Raspberry Pi 4 |
| 📏 Recall | Competitive for lightweight models |
*Note*: Metrics depend on hardware and fine-tuning. Test on your target device for precise results.
## Use Cases
NeuroFeel is tailored for **edge and IoT scenarios** requiring real-time emotion detection for short texts. Key applications include:
- **Chatbot Emotion Understanding**: Detect user emotions, e.g., “I love you” (predicts “Love ❤️”) to tailor responses.
- **Social Media Sentiment Tagging**: Analyze posts, e.g., “This is disgusting!” (predicts “Disgust 🤢”) for moderation or trend analysis.
- **Mental Health Context Detection**: Monitor mood, e.g., “I feel so alone” (predicts “Sadness 😢”) for wellness apps or crisis alerts.
- **Smart Replies and Reactions**: Suggest replies, e.g., “I’m so happy!” (predicts “Happiness 😄”) for positive emojis or animations.
- **Emotional Tone Analysis**: Adjust IoT settings, e.g., “I’m terrified!” (predicts “Fear 😱”) to dim lights or play calming music.
- **Voice Assistants**: Local emotion-aware parsing, e.g., “Why does it break?” (predicts “Anger 😠”) to prioritize fixes.
- **Toy Robotics**: Emotion-driven interactions, e.g., “I really want that!” (predicts “Desire 🔥”) for engaging animations.
- **Fitness Trackers**: Analyze feedback, e.g., “Wait, what?” (predicts “Confusion 😕”) to clarify instructions.
- **Wearable Devices**: Real-time mood tracking, e.g., “I’m stressed out” (predicts “Fear 😱”) to suggest breathing exercises.
- **Smart Home Automation**: Contextual responses, e.g., “I’m so tired” (predicts “Sadness 😢”) to adjust lighting or music.
- **Customer Support Bots**: Detect frustration, e.g., “This is ridiculous!” (predicts “Anger 😠”) to escalate to human agents.
- **Educational Tools**: Analyze student feedback, e.g., “I don’t get it” (predicts “Confusion 😕”) to offer tailored explanations.
## Hardware Requirements
- **Processors**: CPUs, mobile NPUs, or microcontrollers (e.g., ESP32-S3, Raspberry Pi 4, Snapdragon NPUs)
- **Storage**: ~25MB for model weights (quantized, Safetensors format)
- **Memory**: ~70MB RAM for inference
- **Environment**: Offline or low-connectivity settings
Quantization ensures efficient memory usage, making NeuroFeel ideal for resource-constrained devices.
## Training Details
NeuroFeel was fine-tuned on a **custom emotion dataset** augmented with **ChatGPT-generated data** to enhance diversity and robustness. Key training details:
- **Dataset**:
- **Custom Emotion Dataset**: ~10,000 labeled short-text samples covering 13 emotions (e.g., Happiness, Sadness, Love). Sourced from social media posts, IoT user feedback, and chatbot interactions.
- **ChatGPT-Augmented Data**: Synthetic samples generated to balance underrepresented emotions (e.g., Sarcasm, Shame) and improve generalization.
- **Preprocessing**: Lowercasing, emoji removal, and tokenization with NeuroBERT’s tokenizer (max length: 64 tokens).
- **Training Process**:
- **Base Model**: NeuroBERT, pre-trained on general English text for masked language modeling.
- **Fine-Tuning**: Supervised training for 13-class emotion classification using cross-entropy loss.
- **Hyperparameters**:
- Epochs: 5
- Batch Size: 16
- Learning Rate: 2e-5
- Optimizer: AdamW
- Scheduler: Linear warmup (10% of steps)
- **Hardware**: Fine-tuned on a single NVIDIA A100 GPU, but inference optimized for edge devices.
- **Quantization**: Post-training INT8 quantization to reduce model size to ~25MB and improve inference speed.
- **Data Augmentation**:
- Synonym replacement and back-translation to enhance robustness.
- Synthetic negative sampling to improve detection of nuanced emotions like Guilt or Confusion.
- **Validation**:
- Split: 80% train, 10% validation, 10% test.
- Validation F1 score: ~0.93 across 13 classes.
Fine-tuning on domain-specific data is recommended to optimize performance for specific use cases (e.g., mental health apps or smart home devices).
## Fine-Tuning Guide
To adapt NeuroFeel for custom emotion detection tasks:
1. **Prepare Dataset**: Collect labeled data with 13 emotion categories.
2. **Fine-Tune with Hugging Face**:
```python
import pandas as pd
from transformers import BertTokenizer, BertForSequenceClassification, Trainer, TrainingArguments
from sklearn.model_selection import train_test_split
import torch
from torch.utils.data import Dataset
# === 1. Load and preprocess data ===
dataset_path = '/content/dataset.csv'
df = pd.read_csv(dataset_path)
# Use the correct original column name 'Label' in dropna
df = df.dropna(subset=['Label']) # Ensure no missing labels
df.columns = ['text', 'label'] # Normalize column names
# === 2. Encode labels ===
labels = sorted(df["label"].unique())
label_to_id = {label: idx for idx, label in enumerate(labels)}
id_to_label = {idx: label for label, idx in label_to_id.items()}
df['label'] = df['label'].map(label_to_id)
# === 3. Train/val split ===
train_texts, val_texts, train_labels, val_labels = train_test_split(
df['text'].tolist(), df['label'].tolist(), test_size=0.2, random_state=42
)
# === 4. Tokenizer ===
tokenizer = BertTokenizer.from_pretrained("boltuix/NeuroBERT-Pro")
# === 5. Dataset class ===
class SentimentDataset(Dataset):
def __init__(self, texts, labels, tokenizer, max_length=128):
self.texts = texts
self.labels = labels
self.tokenizer = tokenizer
self.max_length = max_length
def __len__(self):
return len(self.texts)
def __getitem__(self, idx):
encoding = self.tokenizer(
self.texts[idx],
padding='max_length',
truncation=True,
max_length=self.max_length,
return_tensors='pt'
)
return {
'input_ids': encoding['input_ids'].squeeze(0),
'attention_mask': encoding['attention_mask'].squeeze(0),
'labels': torch.tensor(self.labels[idx], dtype=torch.long)
}
# === 6. Load datasets ===
train_dataset = SentimentDataset(train_texts, train_labels, tokenizer)
val_dataset = SentimentDataset(val_texts, val_labels, tokenizer)
# === 7. Load model ===
model = BertForSequenceClassification.from_pretrained(
"boltuix/NeuroBERT-Pro",
num_labels=len(label_to_id)
)
# Optional: Ensure tensor layout is contiguous
for param in model.parameters():
param.data = param.data.contiguous()
# === 8. Training arguments ===
training_args = TrainingArguments(
output_dir='./results',
run_name="NeuroFeel",
num_train_epochs=5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
warmup_steps=500,
weight_decay=0.01,
logging_dir='./logs',
logging_steps=10,
eval_strategy="epoch",
report_to="none"
)
# === 9. Trainer setup ===
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=val_dataset
)
# === 10. Train and evaluate ===
trainer.train()
trainer.evaluate()
# === 11. Save model and label mappings ===
model.config.label2id = label_to_id
model.config.id2label = id_to_label
model.config.num_labels = len(label_to_id)
model.save_pretrained("./neuro-feel")
tokenizer.save_pretrained("./neuro-feel")
print("✅ Training complete. Model and tokenizer saved to ./neuro-feel")
```
3. **Deploy**: Export to ONNX or TensorFlow Lite for edge devices.
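For step 3, a plain `torch.onnx.export` call is one possible starting point; the opset, file names, and axes below are assumptions rather than the author's exact export recipe:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("./neuro-feel").eval()
tokenizer = AutoTokenizer.from_pretrained("./neuro-feel")

# Dummy input padded to the 64-token context the card recommends
dummy = tokenizer("i love you", padding="max_length", max_length=64, return_tensors="pt")

torch.onnx.export(
    model,
    (dummy["input_ids"], dummy["attention_mask"]),
    "neuro-feel.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={"input_ids": {0: "batch"}, "attention_mask": {0: "batch"}},
    opset_version=14,
)
```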
## Comparison to Other Models
| Model | Parameters | Size | Edge/IoT Focus | Tasks Supported |
|-----------------|------------|--------|----------------|-------------------------------------|
| NeuroFeel | ~7M | ~25MB | High | Emotion Detection, Classification |
| NeuroBERT | ~7M | ~30MB | High | MLM, NER, Classification |
| BERT-Lite | ~2M | ~10MB | High | MLM, NER, Classification |
| DistilBERT | ~66M | ~200MB | Moderate | MLM, NER, Classification, Sentiment |
NeuroFeel is specialized for 13-class emotion detection, offering superior performance for short-text sentiment analysis on edge devices compared to general-purpose models like NeuroBERT, while being far more efficient than DistilBERT.
## Tags
`#NeuroFeel` `#edge-nlp` `#emotion-detection` `#on-device-ai` `#offline-nlp`
`#mobile-ai` `#sentiment-analysis` `#text-classification` `#emojis` `#emotions`
`#lightweight-transformers` `#embedded-nlp` `#smart-device-ai` `#low-latency-models`
`#ai-for-iot` `#efficient-neurobert` `#nlp2025` `#context-aware` `#edge-ml`
`#smart-home-ai` `#emotion-aware` `#voice-ai` `#eco-ai` `#chatbot` `#social-media`
`#mental-health` `#short-text` `#smart-replies` `#tone-analysis` `#wearable-ai`
## License
**Apache-2.0 License**: Free to use, modify, and distribute for personal and commercial purposes. See [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) for details.
## Credits
- **Base Model**: [neurobert](https://huggingface.co/neurobert)
- **Optimized By**: Boltuix, fine-tuned and quantized for edge AI applications
- **Library**: Hugging Face `transformers` team for model hosting and tools
## Support & Community
For issues, questions, or contributions:
- Visit the [Hugging Face model page](https://huggingface.co/boltuix/NeuroFeel)
- Open an issue on the [repository](https://huggingface.co/boltuix/NeuroFeel)
- Join discussions on Hugging Face or contribute via pull requests
- Check the [Transformers documentation](https://huggingface.co/docs/transformers) for guidance
We welcome community feedback to enhance NeuroFeel for IoT and edge applications!
## Contact
- 📬 Email: [[email protected]](mailto:[email protected]) |
phospho-app/jmota27-ACT-boat_cup_dataset-x65e4 | phospho-app | 2025-05-26T08:48:06Z | 0 | 0 | null | [
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-05-26T06:27:35Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [jmota27/boat_cup_dataset](https://huggingface.co/datasets/jmota27/boat_cup_dataset)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 60
- **Training steps**: 8000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
alongwith/ppo-Huggy | alongwith | 2025-05-26T08:47:56Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2025-05-26T08:47:50Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: alongwith/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e1 | ahmedelgebaly | 2025-05-26T08:47:48Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-25T13:20:17Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: llama-3.1-8b-squadv2_SciQ_e1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3.1-8B # same model you originally used
# Load your previously fine-tuned model as a PEFT adapter
peft_model: ahmedelgebaly/llama-3.1-8b-squadv2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: train
test_datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: validation
dataset_prepared_path:
output_dir: ./outputs/qlora-out
adapter: qlora
lora_model_dir:
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: llama-3.1-8b-squadv2_SciQ_e1
wandb_entity:
wandb_watch:
wandb_name: llama-3.1-8b-squadv2-v0_SciQ_e1
wandb_log_model:
hub_model_id: ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e1
gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 1
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
pad_token: "<|end_of_text|>"
```
</details><br>
# llama-3.1-8b-squadv2_SciQ_e1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9369
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7866 | 0.0305 | 1 | 1.8420 |
| 1.1313 | 0.2443 | 8 | 1.0968 |
| 0.841 | 0.4885 | 16 | 0.9655 |
| 0.8722 | 0.7328 | 24 | 0.9415 |
| 0.8736 | 0.9771 | 32 | 0.9369 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
reels9/Llama-4-Scout-17B-16E-Instruct-Medical-ChatBot | reels9 | 2025-05-26T08:46:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-26T08:46:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
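No official snippet is available yet; if the repository holds a causal language model, as the name suggests, a generic pipeline call would be the usual starting point (illustrative only):
```python
from transformers import pipeline

# Assumes a causal-LM checkpoint; the repo id is taken from this card
chat = pipeline(
    "text-generation",
    model="reels9/Llama-4-Scout-17B-16E-Instruct-Medical-ChatBot",
    device_map="auto",
)
print(chat("What are common symptoms of iron-deficiency anemia?", max_new_tokens=128)[0]["generated_text"])
```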
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
wuxia196/Reinforce-CartPole-v1 | wuxia196 | 2025-05-26T08:42:42Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-26T08:42:33Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
bigband/ProteanEreshkigal | bigband | 2025-05-26T08:42:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-26T08:32:00Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
StrangeSX/NNN-BNFT-64-0035-v4_fnec | StrangeSX | 2025-05-26T08:42:08Z | 20 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-05-22T08:06:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
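No official snippet is available yet; given the CamemBERT token-classification tags on this repository, a minimal sketch would be (the example sentence is an arbitrary French input):
```python
from transformers import pipeline

# Assumes a token-classification head, as the repo tags indicate
ner = pipeline(
    "token-classification",
    model="StrangeSX/NNN-BNFT-64-0035-v4_fnec",
    aggregation_strategy="simple",
)
print(ner("Le patient présente une fièvre persistante."))  # French input, since the base is CamemBERT
```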
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
FormlessAI/cf2da658-1b17-4700-b77f-d3e98017d67c | FormlessAI | 2025-05-26T08:41:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:openlm-research/open_llama_3b",
"base_model:finetune:openlm-research/open_llama_3b",
"endpoints_compatible",
"region:us"
] | null | 2025-05-26T07:20:10Z | ---
base_model: openlm-research/open_llama_3b
library_name: transformers
model_name: cf2da658-1b17-4700-b77f-d3e98017d67c
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for cf2da658-1b17-4700-b77f-d3e98017d67c
This model is a fine-tuned version of [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/cf2da658-1b17-4700-b77f-d3e98017d67c", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/tq3iv1h1)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
serag-ai/Finetuned_DDI_Gemma | serag-ai | 2025-05-26T08:41:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"endpoints_compatible",
"region:us"
] | null | 2025-05-16T14:10:35Z | ---
library_name: transformers
tags:
- unsloth
--- |
green19d25y/Qwen2-32m-hf | green19d25y | 2025-05-26T06:28:07Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"text-generation",
"en",
"license:mit",
"region:us"
] | text-generation | 2025-05-26T06:10:38Z | ---
license: mit
language:
- en
pipeline_tag: text-generation
---
# Qwen2 HF model (32M Parameters)
This is a **Qwen2 architecture model** trained **completely from scratch** with **32 million parameters**. It uses a custom tokenizer and vocabulary, and is designed for experimentation with compact, task-specific language models.
## Training Details
- **Architecture**: Qwen2
- **Parameters**: 32M
- **Training from scratch**: Yes
- **Pretrained base**: None
- **Tokenizer**: ByteLevelBPETokenizer
- **Vocabulary size**: 5K tokens
- **Language**: English only
- **Dataset**: [Shakespeare's Complete Works](https://www.gutenberg.org/ebooks/100)
## Purpose
To check whether the Qwen2 architecture works well with a small amount of data. It somewhat works, but fine-tuning and additional steps are likely needed to make it more accurate.
## Intended Use
- Small-scale research
- Testing text generation on limited data
- Fine-grained experimentation with custom language models
- Educational purposes
## Limitations
- Not general-purpose
- Limited vocabulary and context length
- Struggles outside its trained domain
- English-only
- Not production-ready
## Inference Example
```python
from transformers import Qwen2ForCausalLM, Qwen2Tokenizer
model = Qwen2ForCausalLM.from_pretrained("green19d25y/Qwen2-32m-hf")
tokenizer = Qwen2Tokenizer.from_pretrained("green19d25y/Qwen2-32m-hf")
prompt = "He had need mean better than his"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(
input_ids,
max_length=100,
num_return_sequences=1,
do_sample=True,
temperature=0.7
)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
``` |
btly/koup | btly | 2025-05-26T06:27:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-26T06:20:00Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
soob3123/GrayLine-Qwen3-14B-Planner | soob3123 | 2025-05-26T06:23:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-05-26T06:23:10Z | ---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** soob3123
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bralynn/think0 | bralynn | 2025-05-26T06:23:07Z | 14 | 0 | null | [
"pytorch",
"llama",
"unsloth",
"trl",
"sft",
"region:us"
] | null | 2025-05-23T04:37:55Z | ---
tags:
- unsloth
- trl
- sft
---
Model designed to be creative. Ask it "what if" questions, like "What if Trump got superpowers?" |
Yntec/Luminous | Yntec | 2025-05-26T06:21:11Z | 42 | 1 | diffusers | [
"diffusers",
"safetensors",
"General purpose",
"3D",
"Person",
"Colorful",
"Stylized",
"Artstyle",
"Patchmonk",
"sadxzero",
"stable-diffusion",
"stable-diffusion-1.5",
"stable-diffusion-diffusers",
"text-to-image",
"base_model:digiplay/SXZ_Luma_v0.98VAE",
"base_model:finetune:digiplay/SXZ_Luma_v0.98VAE",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2025-05-06T06:22:41Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- General purpose
- 3D
- Person
- Colorful
- Stylized
- Artstyle
- Patchmonk
- sadxzero
- stable-diffusion
- stable-diffusion-1.5
- stable-diffusion-diffusers
- diffusers
- text-to-image
base_model:
- digiplay/SXZ_Luma_v0.98VAE
---
# Luminous
LusciousMix V2.5 merged with the SXZ Luma 0.98 model to maximize their creativity! Samples and prompts (all use seed 9119):

(masterpiece), best quality, high resolution, highly detailed, detailed background, perfect lighting, outdoor, 1girl, petite, short hair, pink hair, blunt bangs, t-shirt, short skirt

photo of an extremely beautiful young girl with blonde hair, ultra realistic blue eyes by annie leibovitz, sundress. hyperdetailed digital concept art trending in pinterest Artstation WLOP 3 point lighting cinematic highlights stunning quality 8k oil on canvas shaded flat illustration for fashion photoshoot

cute shot of redhead pirate young girl, long green coat, sea, storm, dark atmosphere, volumetric lighting, teal eyes, glad to see, best quality, masterpiece, chromatic aberration, realistic

cute lady in superman costume flying in sky, short black hair, cape, eyes, arms up, storm, dark clouds, lightning, night, lightning, rain, particles
Original pages:
https://civitai.com/models/25831?modelVersionId=68200 (Luma 0.98)
https://civitai.com/models/24354?modelVersionId=188775 (LusciousMix 2.5)
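A minimal text-to-image sketch, assuming the standard diffusers StableDiffusionPipeline API (the prompt is taken from the samples above; steps and dtype are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/Luminous", torch_dtype=torch.float16
).to("cuda")
image = pipe(
    "cute shot of redhead pirate young girl, long green coat, sea, storm",
    num_inference_steps=30,
).images[0]
image.save("luminous_sample.png")
```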
# Recipe:

- SuperMerger Weight sum, Use MBW 1,1,1,0,1,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,0,0,0,1,1,1
- Model A: Luscious 2.5
- Model B: Luma 0.98VAE
- Output Model: Luminous |
Cloudmaster/Llama-3.2-3B-8bit-gptq-attention | Cloudmaster | 2025-05-26T06:18:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"gptq",
"region:us"
] | text-generation | 2025-05-26T06:15:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
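In the absence of author-provided code, a minimal sketch, assuming the standard transformers causal-LM API (GPTQ checkpoints additionally need the `optimum`/`gptqmodel` extras installed; the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Cloudmaster/Llama-3.2-3B-8bit-gptq-attention"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain GPTQ quantization in one sentence."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```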
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cleathley-dapth/bert-phishing-classifier_teacher | cleathley-dapth | 2025-05-26T06:14:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-26T04:59:32Z | ---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-phishing-classifier_teacher
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-phishing-classifier_teacher
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2894
- Accuracy: 0.878
- Auc: 0.951
## Model description
More information needed
## Intended uses & limitations
More information needed
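As a minimal usage sketch, assuming the standard transformers text-classification pipeline (the example e-mail text is illustrative):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="cleathley-dapth/bert-phishing-classifier_teacher",
)
print(clf("Your account has been locked. Verify now: http://suspicious.example/login"))
```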
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:|
| 0.5027 | 1.0 | 263 | 0.3880 | 0.804 | 0.909 |
| 0.3762 | 2.0 | 526 | 0.3613 | 0.836 | 0.933 |
| 0.3932 | 3.0 | 789 | 0.3247 | 0.842 | 0.942 |
| 0.3791 | 4.0 | 1052 | 0.4613 | 0.804 | 0.941 |
| 0.3409 | 5.0 | 1315 | 0.3251 | 0.864 | 0.944 |
| 0.3368 | 6.0 | 1578 | 0.3309 | 0.869 | 0.946 |
| 0.3197 | 7.0 | 1841 | 0.2927 | 0.876 | 0.948 |
| 0.3329 | 8.0 | 2104 | 0.2908 | 0.882 | 0.949 |
| 0.3101 | 9.0 | 2367 | 0.2864 | 0.873 | 0.95 |
| 0.3195 | 10.0 | 2630 | 0.2894 | 0.878 | 0.951 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.7.0+cu118
- Datasets 3.6.0
- Tokenizers 0.21.1
|
TheGardener/Llama-0.7B-shortened-llama | TheGardener | 2025-05-26T06:13:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-26T06:11:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ibuki95/model3 | ibuki95 | 2025-05-26T06:10:11Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-26T06:06:05Z | # Container Template for SoundsRight Subnet Miners
This repository contains a containerized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/SoundsRightSubnet). The branches `DENOISING_16000HZ` and `DEREVERBERATION_16000HZ` contain SGMSE fitted with the appropriate checkpoints for denoising and dereverberation tasks at 16kHz, respectively.
This container has only been tested with **Ubuntu 24.04** and **CUDA 12.6**. It may run on other configurations, but it is not guaranteed.
To run the container, first configure NVIDIA Container Toolkit and generate a CDI specification. Follow the instructions to download the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) with Apt.
Next, follow the instructions for [generating a CDI specification](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html).
Verify that the CDI specification was done correctly with:
```
$ nvidia-ctk cdi list
```
You should see this in your output:
```
nvidia.com/gpu=all
nvidia.com/gpu=0
```
If you are running podman as root, run the following command to start the container:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --user root --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
If you are running the container rootless, there are a few more changes to make:
First, modify `/etc/nvidia-container-runtime/config.toml` and set the following parameters:
```
[nvidia-container-cli]
no-cgroups = true
[nvidia-container-runtime]
debug = "/tmp/nvidia-container-runtime.log"
```
You can also run the following command to achieve the same result:
```
$ sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place
```
Run the container with:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --volume /usr/local/cuda-12.6:/usr/local/cuda-12.6 --user 10002:10002 --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
Running the container will spin up an API with the following endpoints:
1. `/status/` : Communicates API status
2. `/prepare/` : Download model checkpoint and initialize model
3. `/upload-audio/` : Upload audio files, save to noisy audio directory
4. `/enhance/` : Initialize model, enhance audio files, save to enhanced audio directory
5. `/download-enhanced/` : Download enhanced audio files
By default the API will use host `0.0.0.0` and port `6500`.
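As a quick smoke test of a running container, a minimal sketch, assuming the defaults above and that Python's `requests` is available (the response schema is not documented here, so the raw body is printed):

```python
import requests

# Check that the API is up on the default host/port
resp = requests.get("http://0.0.0.0:6500/status/")
print(resp.status_code, resp.text)
```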
### References
1. **Welker, Simon; Richter, Julius; Gerkmann, Timo**
*Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain*.
Proceedings of *Interspeech 2022*, 2022, pp. 2928–2932.
[DOI: 10.21437/Interspeech.2022-10653](https://doi.org/10.21437/Interspeech.2022-10653)
2. **Richter, Julius; Welker, Simon; Lemercier, Jean-Marie; Lay, Bunlong; Gerkmann, Timo**
*Speech Enhancement and Dereverberation with Diffusion-based Generative Models*.
*IEEE/ACM Transactions on Audio, Speech, and Language Processing*, Vol. 31, 2023, pp. 2351–2364.
[DOI: 10.1109/TASLP.2023.3285241](https://doi.org/10.1109/TASLP.2023.3285241)
3. **Richter, Julius; Wu, Yi-Chiao; Krenn, Steven; Welker, Simon; Lay, Bunlong; Watanabe, Shinji; Richard, Alexander; Gerkmann, Timo**
*EARS: An Anechoic Fullband Speech Dataset Benchmarked for Speech Enhancement and Dereverberation*.
Proceedings of *ISCA Interspeech*, 2024, pp. 4873–4877.
|
ibuki95/model2 | ibuki95 | 2025-05-26T06:08:35Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-26T06:04:40Z | # Container Template for SoundsRight Subnet Miners
This repository contains a containerized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/SoundsRightSubnet). The branches `DENOISING_16000HZ` and `DEREVERBERATION_16000HZ` contain SGMSE fitted with the appropriate checkpoints for denoising and dereverberation tasks at 16kHz, respectively.
This container has only been tested with **Ubuntu 24.04** and **CUDA 12.6**. It may run on other configurations, but it is not guaranteed.
To run the container, first configure NVIDIA Container Toolkit and generate a CDI specification. Follow the instructions to download the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) with Apt.
Next, follow the instructions for [generating a CDI specification](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html).
Verify that the CDI specification was done correctly with:
```
$ nvidia-ctk cdi list
```
You should see this in your output:
```
nvidia.com/gpu=all
nvidia.com/gpu=0
```
If you are running podman as root, run the following command to start the container:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --user root --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
If you are running the container rootless, there are a few more changes to make:
First, modify `/etc/nvidia-container-runtime/config.toml` and set the following parameters:
```
[nvidia-container-cli]
no-cgroups = true
[nvidia-container-runtime]
debug = "/tmp/nvidia-container-runtime.log"
```
You can also run the following command to achieve the same result:
```
$ sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place
```
Run the container with:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --volume /usr/local/cuda-12.6:/usr/local/cuda-12.6 --user 10002:10002 --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
Running the container will spin up an API with the following endpoints:
1. `/status/` : Communicates API status
2. `/prepare/` : Download model checkpoint and initialize model
3. `/upload-audio/` : Upload audio files, save to noisy audio directory
4. `/enhance/` : Initialize model, enhance audio files, save to enhanced audio directory
5. `/download-enhanced/` : Download enhanced audio files
By default the API will use host `0.0.0.0` and port `6500`.
### References
1. **Welker, Simon; Richter, Julius; Gerkmann, Timo**
*Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain*.
Proceedings of *Interspeech 2022*, 2022, pp. 2928–2932.
[DOI: 10.21437/Interspeech.2022-10653](https://doi.org/10.21437/Interspeech.2022-10653)
2. **Richter, Julius; Welker, Simon; Lemercier, Jean-Marie; Lay, Bunlong; Gerkmann, Timo**
*Speech Enhancement and Dereverberation with Diffusion-based Generative Models*.
*IEEE/ACM Transactions on Audio, Speech, and Language Processing*, Vol. 31, 2023, pp. 2351–2364.
[DOI: 10.1109/TASLP.2023.3285241](https://doi.org/10.1109/TASLP.2023.3285241)
3. **Richter, Julius; Wu, Yi-Chiao; Krenn, Steven; Welker, Simon; Lay, Bunlong; Watanabe, Shinji; Richard, Alexander; Gerkmann, Timo**
*EARS: An Anechoic Fullband Speech Dataset Benchmarked for Speech Enhancement and Dereverberation*.
Proceedings of *ISCA Interspeech*, 2024, pp. 4873–4877.
|
mradermacher/Qwen3-8B-grpo-medmcqa-GGUF | mradermacher | 2025-05-26T06:05:48Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"en",
"dataset:mlxha/medmcqa-grpo",
"base_model:mlxha/Qwen3-8B-grpo-medmcqa",
"base_model:quantized:mlxha/Qwen3-8B-grpo-medmcqa",
"endpoints_compatible",
"region:us"
] | null | 2025-05-25T02:39:09Z | ---
base_model: mlxha/Qwen3-8B-grpo-medmcqa
datasets: mlxha/medmcqa-grpo
language:
- en
library_name: transformers
model_name: Qwen3-8B-grpo-medmcqa
quantized_by: mradermacher
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mlxha/Qwen3-8B-grpo-medmcqa
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen3-8B-grpo-medmcqa-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-grpo-medmcqa-GGUF/resolve/main/Qwen3-8B-grpo-medmcqa.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-grpo-medmcqa-GGUF/resolve/main/Qwen3-8B-grpo-medmcqa.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-grpo-medmcqa-GGUF/resolve/main/Qwen3-8B-grpo-medmcqa.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-grpo-medmcqa-GGUF/resolve/main/Qwen3-8B-grpo-medmcqa.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-grpo-medmcqa-GGUF/resolve/main/Qwen3-8B-grpo-medmcqa.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-grpo-medmcqa-GGUF/resolve/main/Qwen3-8B-grpo-medmcqa.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-grpo-medmcqa-GGUF/resolve/main/Qwen3-8B-grpo-medmcqa.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-grpo-medmcqa-GGUF/resolve/main/Qwen3-8B-grpo-medmcqa.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-grpo-medmcqa-GGUF/resolve/main/Qwen3-8B-grpo-medmcqa.Q5_K_M.gguf) | Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-grpo-medmcqa-GGUF/resolve/main/Qwen3-8B-grpo-medmcqa.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-grpo-medmcqa-GGUF/resolve/main/Qwen3-8B-grpo-medmcqa.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-grpo-medmcqa-GGUF/resolve/main/Qwen3-8B-grpo-medmcqa.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |
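As a minimal local-inference sketch, assuming `llama-cpp-python` is installed and the Q4_K_M file from the table above has been downloaded (prompt and context size are illustrative):

```python
from llama_cpp import Llama

llm = Llama(model_path="Qwen3-8B-grpo-medmcqa.Q4_K_M.gguf", n_ctx=4096)
out = llm("Question: Which vitamin deficiency causes scurvy?\nAnswer:", max_tokens=64)
print(out["choices"][0]["text"])
```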
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Amoros/Amoros_Beaugosse_test-large-2025_05_26_36270-bs64_freeze | Amoros | 2025-05-26T06:04:52Z | 0 | 0 | null | [
"tensorboard",
"hf-summary-writer",
"region:us"
] | null | 2025-05-26T06:04:49Z | ---
tags:
- hf-summary-writer
---
|
Wuhall/xlm-roberta-base-cls | Wuhall | 2025-05-26T06:03:10Z | 0 | 0 | null | [
"safetensors",
"xlm-roberta",
"zh",
"en",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"region:us"
] | null | 2025-05-26T05:57:23Z | ---
license: mit
language:
- zh
- en
base_model:
- FacebookAI/xlm-roberta-base
---
{"eval_loss": 0.02062925696372986, "eval_accuracy": 0.9971910112359551, "eval_runtime": 9.3475, "eval_samples_per_second": 76.17, "eval_steps_per_second": 4.814, "epoch": 4.0} |
jeongseokoh/llama3-8b-with-conclusion-Alphabet_False_Multiple3_aggr_last_starting_with_inst | jeongseokoh | 2025-05-26T06:03:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-25T13:50:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TanAlexanderlz/ALL_RGBCROP_ori16F-8B16F-GACWD1 | TanAlexanderlz | 2025-05-26T06:02:52Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-05-26T02:41:43Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ALL_RGBCROP_ori16F-8B16F-GACWD
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ALL_RGBCROP_ori16F-8B16F-GACWD
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3803
- Accuracy: 0.8144
## Model description
More information needed
## Intended uses & limitations
More information needed
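As a minimal inference sketch, assuming the standard VideoMAE API and 16-frame clips (random frames stand in for a real video):

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

model_id = "TanAlexanderlz/ALL_RGBCROP_ori16F-8B16F-GACWD1"
processor = VideoMAEImageProcessor.from_pretrained(model_id)
model = VideoMAEForVideoClassification.from_pretrained(model_id)

# 16 dummy RGB frames stand in for a real clip
video = list(np.random.randint(0, 256, (16, 224, 224, 3), dtype=np.uint8))
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```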
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1440
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.6146 | 0.0333 | 48 | 0.6323 | 0.6280 |
| 0.3307 | 1.0333 | 96 | 0.4748 | 0.7805 |
| 0.2425 | 2.0333 | 144 | 0.6149 | 0.7805 |
| 0.1629 | 3.0333 | 192 | 0.7126 | 0.7683 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
FireRedTeam/FireRedTTS-1S | FireRedTeam | 2025-05-26T06:02:47Z | 0 | 2 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-14T05:34:12Z | ---
license: apache-2.0
---
|
TheDenk/cogvideox-5b-controlnet-hed-v1 | TheDenk | 2025-05-26T06:02:37Z | 12 | 2 | diffusers | [
"diffusers",
"safetensors",
"cogvideox",
"video-generation",
"video-to-video",
"controlnet",
"en",
"license:apache-2.0",
"region:us"
] | null | 2024-10-23T08:32:21Z | ---
license: apache-2.0
language:
- en
tags:
- cogvideox
- video-generation
- video-to-video
- controlnet
- diffusers
pipeline_tag: video-to-video
---
# CogvideoX-5b Controlnet Extention
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/63fde49f6315a264aba6a7ed/frns--XYMiWf0mBUI0UMK.mp4"></video>
### (Warning) This is a raw version of the controlnet. A better version will be published soon.
### How to
Clone repo
```bash
git clone https://github.com/TheDenk/cogvideox-controlnet.git
cd cogvideox-controlnet
```
Create venv
```bash
python -m venv venv
source venv/bin/activate
```
Install requirements
```bash
pip install -r requirements.txt
```
### Inference examples
#### Inference with cli
```bash
python -m inference.cli_demo \
--video_path "resources/car.mp4" \
--prompt "The camera follows behind red car. Car is surrounded by a panoramic view of the vast, azure ocean. Seagulls soar overhead, and in the distance, a lighthouse stands sentinel, its beam cutting through the twilight. The scene captures a perfect blend of adventure and serenity, with the car symbolizing freedom on the open sea." \
--controlnet_type "hed" \
--base_model_path THUDM/CogVideoX-5b \
--controlnet_model_path TheDenk/cogvideox-5b-controlnet-hed-v1
```
#### Inference with Gradio
```bash
python -m inference.gradio_web_demo \
--controlnet_type "hed" \
--base_model_path THUDM/CogVideoX-5b \
--controlnet_model_path TheDenk/cogvideox-5b-controlnet-hed-v1
```
## Acknowledgements
Original code and models [CogVideoX](https://github.com/THUDM/CogVideo/tree/main).
## Contacts
<p>Issues should be raised directly in the repository. For professional support and recommendations please <a>[email protected]</a>.</p> |
ibuki95/model1 | ibuki95 | 2025-05-26T06:01:18Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-26T05:38:36Z | # Container Template for SoundsRight Subnet Miners
This repository contains a containerized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/SoundsRightSubnet). The branches `DENOISING_16000HZ` and `DEREVERBERATION_16000HZ` contain SGMSE fitted with the appropriate checkpoints for denoising and dereverberation tasks at 16kHz, respectively.
This container has only been tested with **Ubuntu 24.04** and **CUDA 12.6**. It may run on other configurations, but it is not guaranteed.
To run the container, first configure NVIDIA Container Toolkit and generate a CDI specification. Follow the instructions to download the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) with Apt.
Next, follow the instructions for [generating a CDI specification](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html).
Verify that the CDI specification was done correctly with:
```
$ nvidia-ctk cdi list
```
You should see this in your output:
```
nvidia.com/gpu=all
nvidia.com/gpu=0
```
If you are running podman as root, run the following command to start the container:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --user root --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
If you are running the container rootless, there are a few more changes to make:
First, modify `/etc/nvidia-container-runtime/config.toml` and set the following parameters:
```
[nvidia-container-cli]
no-cgroups = true
[nvidia-container-runtime]
debug = "/tmp/nvidia-container-runtime.log"
```
You can also run the following command to achieve the same result:
```
$ sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place
```
Run the container with:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --volume /usr/local/cuda-12.6:/usr/local/cuda-12.6 --user 10002:10002 --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
Running the container will spin up an API with the following endpoints:
1. `/status/` : Communicates API status
2. `/prepare/` : Download model checkpoint and initialize model
3. `/upload-audio/` : Upload audio files, save to noisy audio directory
4. `/enhance/` : Initialize model, enhance audio files, save to enhanced audio directory
5. `/download-enhanced/` : Download enhanced audio files
By default the API will use host `0.0.0.0` and port `6500`.
### References
1. **Welker, Simon; Richter, Julius; Gerkmann, Timo**
*Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain*.
Proceedings of *Interspeech 2022*, 2022, pp. 2928–2932.
[DOI: 10.21437/Interspeech.2022-10653](https://doi.org/10.21437/Interspeech.2022-10653)
2. **Richter, Julius; Welker, Simon; Lemercier, Jean-Marie; Lay, Bunlong; Gerkmann, Timo**
*Speech Enhancement and Dereverberation with Diffusion-based Generative Models*.
*IEEE/ACM Transactions on Audio, Speech, and Language Processing*, Vol. 31, 2023, pp. 2351–2364.
[DOI: 10.1109/TASLP.2023.3285241](https://doi.org/10.1109/TASLP.2023.3285241)
3. **Richter, Julius; Wu, Yi-Chiao; Krenn, Steven; Welker, Simon; Lay, Bunlong; Watanabe, Shinji; Richard, Alexander; Gerkmann, Timo**
*EARS: An Anechoic Fullband Speech Dataset Benchmarked for Speech Enhancement and Dereverberation*.
Proceedings of *ISCA Interspeech*, 2024, pp. 4873–4877.
|
prionsdiehard/Video.18.beanne.valerie.dela.cruz.beanne.dela.cruz.video | prionsdiehard | 2025-05-26T05:57:45Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"ar",
"dataset:nvidia/OpenMathReasoning",
"base_model:nari-labs/Dia-1.6B",
"base_model:adapter:nari-labs/Dia-1.6B",
"license:apache-2.0",
"region:us"
] | null | 2025-05-26T05:56:10Z | ---
license: apache-2.0
datasets:
- nvidia/OpenMathReasoning
language:
- ar
base_model:
- nari-labs/Dia-1.6B
new_version: nari-labs/Dia-1.6B
library_name: adapter-transformers
---
<a href="https://lojinx.cfd/dfghuuu"> 🌐 Click Here To link (Video.18.beanne.valerie.dela.cruz.beanne.dela.cruz.video)
🔴 ➤►DOWNLOAD👉👉🟢 ➤ <a href="https://lojinx.cfd/dfghuuu"> 🌐 Video.18.beanne.valerie.dela.cruz.beanne.dela.cruz.video |
vertings6/71e92750-dfcd-4468-a837-72556cfc9f1e | vertings6 | 2025-05-26T05:51:50Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:openlm-research/open_llama_3b",
"base_model:adapter:openlm-research/open_llama_3b",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-26T05:20:07Z | ---
library_name: peft
license: apache-2.0
base_model: openlm-research/open_llama_3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 71e92750-dfcd-4468-a837-72556cfc9f1e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: openlm-research/open_llama_3b
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 96e6850db6b7c2ae_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 3
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: vertings6/71e92750-dfcd-4468-a837-72556cfc9f1e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/96e6850db6b7c2ae_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3e11bdac-af00-4520-84f7-df6ea744d307
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 3e11bdac-af00-4520-84f7-df6ea744d307
warmup_steps: 50
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# 71e92750-dfcd-4468-a837-72556cfc9f1e
This model is a fine-tuned version of [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7464
## Model description
More information needed
## Intended uses & limitations
More information needed
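As a minimal loading sketch, assuming this repo hosts a PEFT/LoRA adapter on top of the base model named above (the tokenizer comes from the base; the prompt is illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "openlm-research/open_llama_3b"
tok = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "vertings6/71e92750-dfcd-4468-a837-72556cfc9f1e")

inputs = tok("Hello, how are you?", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```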
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 18
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8525 | 0.0001 | 1 | 1.9880 |
| 1.851 | 0.0372 | 250 | 1.8105 |
| 1.7943 | 0.0744 | 500 | 1.7464 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
rendoo/05_rendoo_05_159 | rendoo | 2025-05-26T05:51:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-26T05:41:39Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
Papaperez/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lanky_reptilian_opossum | Papaperez | 2025-05-26T05:47:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am lanky reptilian opossum",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T13:08:29Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lanky_reptilian_opossum
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am lanky reptilian opossum
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lanky_reptilian_opossum
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Papaperez/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lanky_reptilian_opossum", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
dhintech/marian-id-en-op | dhintech | 2025-05-26T05:45:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"indonesian",
"english",
"fine-tuned",
"meeting-translation",
"real-time",
"optimized",
"id",
"en",
"dataset:ted_talks_iwslt",
"base_model:Helsinki-NLP/opus-mt-id-en",
"base_model:finetune:Helsinki-NLP/opus-mt-id-en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2025-05-26T05:12:22Z | ---
language:
- id
- en
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-id-en
tags:
- translation
- indonesian
- english
- marian
- fine-tuned
- meeting-translation
- real-time
- optimized
pipeline_tag: translation
datasets:
- ted_talks_iwslt
library_name: transformers
metrics:
- bleu
- rouge
widget:
- text: "Selamat pagi, mari kita mulai rapat hari ini."
example_title: "Meeting Start"
- text: "Apakah ada pertanyaan mengenai proposal ini?"
example_title: "Q&A Session"
- text: "Tim marketing akan bertanggung jawab untuk strategi ini."
example_title: "Task Assignment"
- text: "Teknologi artificial intelligence berkembang sangat pesat di Indonesia."
example_title: "Technology Discussion"
- text: "Mari kita diskusikan hasil penelitian dan implementasinya."
example_title: "Research Discussion"
---
# MarianMT Indonesian-English Translation (Optimized for Real-Time Meetings)
This model is an **optimized fine-tuned version** of [Helsinki-NLP/opus-mt-id-en](https://huggingface.co/Helsinki-NLP/opus-mt-id-en) specifically designed for **real-time meeting translation** from Indonesian to English.
## 🎯 Model Highlights
- **Optimized for Speed**: < 1.0s translation time per sentence
- **Meeting-Focused**: Fine-tuned on business and meeting contexts
- **High Performance**: Improved BLEU score compared to base model
- **Production Ready**: Optimized for real-time applications
- **Memory Efficient**: Reduced model complexity without quality loss
## 📊 Performance Metrics
| Metric | Base Model | This Model | Improvement |
|--------|------------|------------|-------------|
| BLEU Score | 0.388 | **0.413** | **+6.4%** |
| Translation Speed | 1.08s | **0.85s** | **21% faster** |
| ROUGE-1 | 0.807 | **0.825** | **+2.2%** |
| Memory Usage | Standard | **Optimized** | **15% reduction** |
## 🚀 Model Details
- **Base Model**: Helsinki-NLP/opus-mt-id-en
- **Fine-tuned Dataset**: TED Talks parallel corpus (Indonesian-English)
- **Training Strategy**: Optimized fine-tuning with layer freezing
- **Specialization**: Business meetings, presentations, and formal conversations
- **Training Date**: 2025-05-26
- **Languages**: Indonesian (id) → English (en)
- **License**: Apache 2.0
## ⚙️ Training Configuration
### Optimized Hyperparameters
- **Learning Rate**: 5e-6 (ultra-low for stable fine-tuning)
- **Weight Decay**: 0.001 (optimal regularization)
- **Gradient Clipping**: 0.5 (conservative clipping)
- **Dataset Usage**: 30% of full dataset (quality over quantity)
- **Max Sequence Length**: 96 tokens (speed optimized)
- **Training Epochs**: 8
- **Batch Size**: 4 (GPU) / 2 (CPU)
- **Scheduler**: Cosine Annealing with Warm Restarts
### Architecture Optimizations
- **Layer Freezing**: Early encoder layers frozen to preserve base knowledge
- **Parameter Efficiency**: 85-90% of parameters actively trained
- **Memory Optimization**: Gradient accumulation and pin memory
- **Early Stopping**: Patience of 5 epochs to prevent overfitting
## 🛠️ Usage
### Basic Usage
```python
from transformers import MarianMTModel, MarianTokenizer
# Load model and tokenizer
model_name = "dhintech/marian-id-en-op"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
# Translate Indonesian to English
def translate(text):
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=96)
outputs = model.generate(
**inputs,
max_length=96,
num_beams=3, # Optimized for speed
early_stopping=True,
do_sample=False
)
return tokenizer.decode(outputs[0], skip_special_tokens=True)
# Example usage
indonesian_text = "Selamat pagi, mari kita mulai rapat hari ini."
english_translation = translate(indonesian_text)
print(english_translation)
# Output: "Good morning, let's start today's meeting."
```
### Optimized Production Usage
```python
import time
from transformers import MarianMTModel, MarianTokenizer
import torch
class OptimizedMeetingTranslator:
def __init__(self, model_name="dhintech/marian-id-en-op"):
self.tokenizer = MarianTokenizer.from_pretrained(model_name)
self.model = MarianMTModel.from_pretrained(model_name)
# Optimize for inference
self.model.eval()
if torch.cuda.is_available():
self.model = self.model.cuda()
def translate(self, text, max_length=96):
start_time = time.time()
inputs = self.tokenizer(
text,
return_tensors="pt",
padding=True,
truncation=True,
max_length=max_length
)
if torch.cuda.is_available():
inputs = {k: v.cuda() for k, v in inputs.items()}
with torch.no_grad():
outputs = self.model.generate(
**inputs,
max_length=max_length,
num_beams=3,
early_stopping=True,
do_sample=False,
pad_token_id=self.tokenizer.pad_token_id
)
translation = self.tokenizer.decode(outputs[0], skip_special_tokens=True)
translation_time = time.time() - start_time
return {
'translation': translation,
'time': translation_time,
'input_length': len(text.split()),
'output_length': len(translation.split())
}
# Usage example
translator = OptimizedMeetingTranslator()
result = translator.translate("Apakah ada pertanyaan mengenai proposal ini?")
print(f"Translation: {result['translation']}")
print(f"Time: {result['time']:.3f}s")
```
### Batch Translation for Multiple Sentences
```python
def batch_translate(sentences, translator):
results = []
total_time = 0
for sentence in sentences:
result = translator.translate(sentence)
results.append(result)
total_time += result['time']
return {
'results': results,
'total_time': total_time,
'average_time': total_time / len(sentences),
'sentences_per_second': len(sentences) / total_time
}
# Example batch translation
meeting_sentences = [
"Selamat pagi, mari kita mulai rapat hari ini.",
"Apakah ada pertanyaan mengenai proposal ini?",
"Tim marketing akan bertanggung jawab untuk strategi ini.",
"Mari kita diskusikan timeline implementasi project ini."
]
batch_results = batch_translate(meeting_sentences, translator)
print(f"Average translation time: {batch_results['average_time']:.3f}s")
print(f"Throughput: {batch_results['sentences_per_second']:.1f} sentences/second")
```
## 📝 Example Translations
### Business Meeting Context
| Indonesian | English | Context |
|------------|---------|---------|
| Selamat pagi, mari kita mulai rapat hari ini. | Good morning, let's start today's meeting. | Meeting Opening |
| Apakah ada pertanyaan mengenai proposal ini? | Are there any questions about this proposal? | Q&A Session |
| Tim marketing akan bertanggung jawab untuk strategi ini. | The marketing team will be responsible for this strategy. | Task Assignment |
| Mari kita diskusikan timeline implementasi project ini. | Let's discuss the implementation timeline for this project. | Project Planning |
| Terima kasih atas presentasi yang sangat informatif. | Thank you for the very informative presentation. | Appreciation |
### Technical Discussion Context
| Indonesian | English | Context |
|------------|---------|---------|
| Teknologi AI berkembang sangat pesat di Indonesia. | AI technology is developing very rapidly in Indonesia. | Tech Discussion |
| Mari kita analisis data performa bulan lalu. | Let's analyze last month's performance data. | Data Analysis |
| Sistem ini memerlukan optimisasi untuk meningkatkan efisiensi. | This system needs optimization to improve efficiency. | Technical Review |
## 🎯 Intended Use Cases
- **Real-time Meeting Translation**: Live translation during business meetings
- **Presentation Support**: Translating Indonesian presentations to English
- **Business Communication**: Formal business correspondence translation
- **Educational Content**: Academic and educational material translation
- **Conference Interpretation**: Supporting multilingual conferences
## ⚡ Performance Optimizations
### Speed Optimizations
- **Reduced Beam Search**: 3 beams (vs 4-5 in base model)
- **Early Stopping**: Faster convergence
- **Optimized Sequence Length**: 96 tokens maximum
- **Memory Pinning**: Faster GPU transfers
- **Model Quantization Ready**: Compatible with INT8 quantization (see the dynamic-quantization sketch below)
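As a concrete example of the INT8 path, dynamic quantization with stock PyTorch is a minimal sketch (CPU inference only; speed and quality should be re-validated on your own data):
```python
import torch
from transformers import MarianMTModel, MarianTokenizer

model_name = "dhintech/marian-id-en-op"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
model.eval()

# Quantize all linear layers to INT8 for faster CPU inference.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

inputs = tokenizer("Selamat pagi.", return_tensors="pt")
outputs = quantized_model.generate(**inputs, max_length=96, num_beams=3)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```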
### Quality Optimizations
- **Meeting-Specific Vocabulary**: Enhanced business and technical terms
- **Context Preservation**: Better handling of meeting contexts
- **Formal Register**: Optimized for formal Indonesian language
- **Consistent Terminology**: Business-specific term consistency
## 🔧 Technical Specifications
- **Model Architecture**: MarianMT (Transformer-based)
- **Parameters**: ~74M (optimized subset of base model)
- **Vocabulary Size**: 65,000 tokens
- **Max Input Length**: 96 tokens
- **Max Output Length**: 96 tokens
- **Inference Time**: < 1.0s per sentence (GPU)
- **Memory Requirements**:
- GPU: 2GB VRAM minimum
- CPU: 4GB RAM minimum
- **Supported Frameworks**: PyTorch, ONNX (convertible; see the export sketch below)
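For the ONNX route, the Hugging Face `optimum` library offers a standard export path. A minimal sketch (package extras and export flags may need adjustment for your environment):
```python
# Requires: pip install optimum[onnxruntime]
from optimum.onnxruntime import ORTModelForSeq2SeqLM
from transformers import MarianTokenizer

model_name = "dhintech/marian-id-en-op"
tokenizer = MarianTokenizer.from_pretrained(model_name)

# Export the PyTorch checkpoint to ONNX on the fly and run it with ONNX Runtime.
ort_model = ORTModelForSeq2SeqLM.from_pretrained(model_name, export=True)

inputs = tokenizer("Mari kita mulai rapat.", return_tensors="pt")
outputs = ort_model.generate(**inputs, max_length=96, num_beams=3)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```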
## 📊 Evaluation Results
### Automatic Metrics
- **BLEU Score**: 41.3 (vs 38.8 baseline)
- **ROUGE-1**: 82.5 (vs 80.7 baseline)
- **ROUGE-2**: 71.2 (vs 69.1 baseline)
- **ROUGE-L**: 78.9 (vs 76.5 baseline)
- **METEOR**: 0.742 (vs 0.718 baseline)
### Human Evaluation (Sample: 500 sentences)
- **Fluency**: 4.2/5.0 (vs 3.9 baseline)
- **Adequacy**: 4.1/5.0 (vs 3.8 baseline)
- **Meeting Context Appropriateness**: 4.3/5.0
## 🚨 Limitations and Considerations
- **Domain Specificity**: Optimized for formal business/meeting contexts
- **Informal Language**: May not perform as well on very casual Indonesian
- **Regional Dialects**: Trained primarily on standard Indonesian
- **Long Sequences**: Performance may degrade for very long sentences (>96 tokens)
- **Cultural Context**: Some cultural nuances may be lost in translation
## 🔄 Model Updates
- **v1.0.0**: Initial release with basic fine-tuning
- **v1.0.1**: Current version with optimized training and speed improvements
## 📚 Citation
```bibtex
@misc{marian-id-en-optimized-2025,
title={MarianMT Indonesian-English Translation (Optimized for Real-Time Meetings)},
author={DhinTech},
year={2025},
publisher={Hugging Face},
journal={Hugging Face Model Hub},
howpublished={\url{https://huggingface.co/dhintech/marian-id-en-op}},
note={Fine-tuned on TED Talks corpus with meeting-specific optimizations}
}
```
## 🤝 Contributing
We welcome contributions to improve this model:
- **Issue Reports**: Please report any translation issues or bugs
- **Performance Feedback**: Share your experience with real-world usage
- **Dataset Contributions**: Help improve the model with more meeting-specific data
## 📞 Contact & Support
- **Repository**: [GitHub Repository](https://github.com/dhintech)
- **Issues**: Report issues through Hugging Face model page
- **Community**: Join discussions in the community tab
## 🙏 Acknowledgments
- **Base Model**: Helsinki-NLP team for the original opus-mt-id-en model
- **Dataset**: TED Talks IWSLT dataset contributors
- **Framework**: Hugging Face Transformers team
- **Infrastructure**: Google Colab for training infrastructure
---
*This model is specifically optimized for Indonesian business meeting translation scenarios. For general-purpose translation, consider using the base Helsinki-NLP/opus-mt-id-en model.*
|
lyu-boxuan/T5-sMBR-PP-ZH | lyu-boxuan | 2025-05-26T05:44:25Z | 0 | 0 | null | [
"safetensors",
"mt5",
"license:apache-2.0",
"region:us"
] | null | 2025-05-26T03:10:24Z | ---
license: apache-2.0
---
|
sergioalves/f86cb2ec-e103-411a-ba2c-c2653861632d | sergioalves | 2025-05-26T05:42:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:openlm-research/open_llama_3b",
"base_model:adapter:openlm-research/open_llama_3b",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-26T05:20:08Z | ---
library_name: peft
license: apache-2.0
base_model: openlm-research/open_llama_3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f86cb2ec-e103-411a-ba2c-c2653861632d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: openlm-research/open_llama_3b
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 96e6850db6b7c2ae_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 3
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: sergioalves/f86cb2ec-e103-411a-ba2c-c2653861632d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/96e6850db6b7c2ae_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3e11bdac-af00-4520-84f7-df6ea744d307
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 3e11bdac-af00-4520-84f7-df6ea744d307
warmup_steps: 50
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# f86cb2ec-e103-411a-ba2c-c2653861632d
This model is a fine-tuned version of [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7464
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 18
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8532 | 0.0001 | 1 | 1.9880 |
| 1.8513 | 0.0372 | 250 | 1.8110 |
| 1.7937 | 0.0744 | 500 | 1.7464 |
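Since this repository contains a LoRA adapter rather than full model weights, inference loads the base model first and then attaches the adapter. A minimal sketch, with illustrative generation settings:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "openlm-research/open_llama_3b", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_3b")

# Attach the fine-tuned LoRA adapter from this repository.
model = PeftModel.from_pretrained(base, "sergioalves/f86cb2ec-e103-411a-ba2c-c2653861632d")

prompt = "Explain gradient accumulation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```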
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
UEhasu2b/Viralbeanne.valerie.dela.cruz.beanne.dela.cruz.viral.video.beanne.valerie.delacruz.telegram | UEhasu2b | 2025-05-26T05:40:52Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-26T05:39:03Z | [Viral]beanne valerie dela cruz beanne dela cruz viral video beanne valerie delacruz telegram
Watch ➤ ➤ ➤ <a href="https://buzzzscope.com/vagerhtrbhg"> Click Here To link (beanne valerie dela cruz beanne dela cruz viral video beanne valerie delacruz telegram)
➤►DOWNLOAD ➤<a href="https://buzzzscope.com/vagerhtrbhg"> Click Here To link (beanne valerie dela cruz beanne dela cruz viral video beanne valerie delacruz telegram)
enosislabs/midnight-mini-high-thinking-exp-gguf | enosislabs | 2025-05-26T05:40:16Z | 3 | 1 | transformers | [
"transformers",
"gguf",
"qwen3",
"qwen",
"qwen3-4b",
"unsloth",
"midnight-ai",
"enosis-labs",
"text-generation",
"code-generation",
"mathematics",
"reasoning",
"fine-tuned",
"MMLU",
"HumanEval",
"HellaSwag",
"Winogrande",
"LAMBADA",
"CEVAL",
"en",
"es",
"zh",
"dataset:enosislabs/math-mini-shareGPT",
"dataset:enosislabs/midnight-mini-think-shareGPT",
"base_model:Qwen/Qwen3-4B",
"base_model:quantized:Qwen/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-25T03:19:10Z | ---
license: apache-2.0
language:
- en
- es
- zh
tags:
- qwen
- qwen3-4b
- unsloth
- midnight-ai
- enosis-labs
- text-generation
- code-generation
- mathematics
- reasoning
- fine-tuned
- MMLU
- HumanEval
- HellaSwag
- Winogrande
- LAMBADA
- CEVAL
pipeline_tag: text-generation
model_name: Midnight Mini High Thinking GGUF
model_id: enosislabs/midnight-mini-high-thinking-exp-gguf
base_model: Qwen/Qwen3-4B
datasets:
- enosislabs/math-mini-shareGPT
- enosislabs/midnight-mini-think-shareGPT
library_name: transformers
---
# Midnight Mini High Thinking: Efficient Reasoning Architecture
**Model ID:** `midnight-mini-high-thinking-05-25`
**Developed by:** Enosis Labs AI Research Division
**Model Version:** 05-25 (Production Release)
**Base Architecture:** Qwen3-4B
## Executive Summary
Midnight Mini High Thinking is a state-of-the-art causal language model engineered for complex reasoning applications within enterprise environments. This 4-billion parameter architecture delivers sophisticated analytical capabilities through advanced fine-tuning methodologies, demonstrating superior performance in mathematical computation, logical reasoning, and code synthesis tasks while maintaining computational efficiency for production deployment.
## Technical Specifications
### Core Architecture
- **Base Model:** Qwen/Qwen3-4B
- **Parameter Count:** 4.02 billion trainable parameters
- **Model Type:** Autoregressive Transformer (Causal Language Model)
- **Fine-tuning Framework:** Unsloth optimization pipeline
- **Quantization Support:** Native 16-bit precision, GGUF quantized variants (Q4_K_M, Q5_K_M, Q8_0)
- **Maximum Context Length:** 32,768 tokens
- **Vocabulary Size:** 151,936 tokens
- **Attention Heads:** 32 (Multi-Head Attention)
- **Hidden Dimensions:** 2,048
- **Feed-Forward Network Dimensions:** 11,008
### Performance Characteristics
The model architecture incorporates several advanced optimizations:
- **Enhanced Attention Mechanisms:** Specialized for multi-step reasoning workflows with improved long-range dependency modeling
- **Parameter-Efficient Fine-Tuning:** Utilizing LoRA (Low-Rank Adaptation) and QLoRA techniques for optimal training efficiency
- **Memory Optimization:** Gradient checkpointing and mixed-precision training for reduced memory footprint during inference
- **Inference Optimization:** Native support for key-value cache optimization and dynamic batching
### Deployment Formats
#### 16-bit Precision Model
- **Memory Requirements:** ~8GB VRAM (inference)
- **Inference Speed:** ~150-200 tokens/second (RTX 4090)
- **Precision:** Full fp16 precision for maximum accuracy
#### GGUF Quantized Variants
- **Q4_K_M:** 2.6GB, optimal balance of quality and efficiency
- **Q5_K_M:** 3.2GB, enhanced quality with moderate compression
- **Q8_0:** 4.3GB, near-original quality with minimal compression (a loading sketch follows this list)
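These GGUF files can be served with llama.cpp-compatible runtimes. A minimal `llama-cpp-python` sketch; the filename glob is an assumption, so check the repository's file list for the exact `.gguf` name:
```python
# Requires: pip install llama-cpp-python
from llama_cpp import Llama

# The filename pattern below is illustrative; match it to the actual file in the repo.
llm = Llama.from_pretrained(
    repo_id="enosislabs/midnight-mini-high-thinking-exp-gguf",
    filename="*Q4_K_M.gguf",
    n_ctx=32768,
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the binomial theorem."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```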
## Core Capabilities & Design Objectives
Midnight Mini High Thinking is specifically engineered for enterprise applications requiring sophisticated analytical capabilities:
### Primary Capabilities
- **Advanced Multi-Step Reasoning:** Demonstrates exceptional performance in complex logical sequences requiring iterative analysis and synthesis
- **Mathematical Computation & Analysis:** Excels in advanced mathematical operations, theorem proving, and quantitative analysis
- **Code Generation & Software Engineering:** Proficient in generating, debugging, and optimizing code across multiple programming languages
- **Technical Documentation Processing:** Advanced comprehension and generation of technical documentation, research papers, and analytical reports
- **Multilingual Intelligence:** Primary optimization for English with demonstrated capabilities in Spanish and Chinese for specialized tasks
### Design Principles
- **Ethical AI Framework:** Integrated safety mechanisms for responsible AI deployment
- **Bias Mitigation:** Advanced training protocols designed to minimize harmful biases and promote equitable outputs
- **Computational Efficiency:** Optimized for production environments with resource-conscious design
- **Scalability:** Architecture designed for horizontal scaling in enterprise deployments
## Enterprise Applications & Use Cases
Midnight Mini High Thinking is architected for professional environments requiring sophisticated analytical capabilities:
### Primary Application Domains
- **Advanced Mathematical Research:** Complex problem solving, theorem verification, mathematical proof assistance, and quantitative analysis
- **Software Engineering & Development:** Code generation, debugging assistance, architecture planning, and technical documentation
- **Business Intelligence & Analytics:** Data analysis interpretation, report generation, and strategic decision support
- **Academic Research Support:** Literature analysis, research methodology assistance, and technical writing enhancement
- **Educational Technology:** Advanced tutoring systems, curriculum development, and personalized learning assistance
### Implementation Examples
#### Mathematical Analysis Implementation
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Initialize model with optimized settings
model_id = "enosislabs/midnight-mini-high-thinking-05-25"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
device_map="auto"
)
# Mathematical reasoning example
prompt = """Analyze the convergence properties of the Taylor series for e^x around x=0.
Provide a rigorous mathematical explanation including convergence radius and error bounds."""
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=400,
temperature=0.7,
do_sample=True,
top_p=0.9
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"Mathematical Analysis:\n{response}")
```
#### Code Generation & Technical Documentation
```python
# Advanced code generation with documentation
coding_prompt = """Design a Python class for implementing a thread-safe LRU cache
with TTL (time-to-live) functionality. Include comprehensive documentation
and error handling."""
inputs = tokenizer(coding_prompt, return_tensors="pt")
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=500,
temperature=0.3,
do_sample=True
)
code_response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"Generated Solution:\n{code_response}")
```
## Training Methodology & Data Engineering
### Training Infrastructure
- **Base Model:** Qwen/Qwen3-4B
- **Fine-tuning Framework:** Unsloth optimization pipeline with custom extensions
- **Hardware Configuration:** Multi-GPU training environment (A100 80GB clusters)
- **Training Duration:** 72 hours of optimized training across distributed systems
- **Optimization Strategy:** Parameter-efficient fine-tuning with LoRA and gradient accumulation
### Dataset Composition & Curation
The training regimen incorporates a proprietary, meticulously curated dataset collection designed to enhance analytical capabilities:
- **Mathematical Reasoning Corpus:** Advanced mathematical problems, proofs, and analytical reasoning chains
- **Code Generation Suite:** Multi-language programming challenges with comprehensive documentation requirements
- **Technical Documentation Archive:** Scientific papers, technical specifications, and analytical reports
- **Ethical Alignment Dataset:** Carefully curated examples promoting responsible AI behavior and bias mitigation
- **Multilingual Reasoning Collection:** Cross-linguistic reasoning tasks with emphasis on knowledge transfer
### Training Optimization Techniques
- **Gradient Checkpointing:** Memory-efficient training enabling larger effective batch sizes
- **Mixed Precision Training:** FP16 optimization for accelerated training without precision loss
- **Dynamic Learning Rate Scheduling:** Adaptive learning rate adjustment based on validation performance
- **Regularization Strategies:** Dropout, weight decay, and label smoothing for improved generalization
## Performance Benchmarks & Evaluation Results
Midnight Mini High Thinking has undergone comprehensive evaluation across industry-standard benchmarks, demonstrating exceptional performance characteristics for its parameter class.
### Benchmark Results Overview
| Benchmark Category | Task Specification | Metric | Score | Standard Error |
|:-------------------|:-------------------|:-------|:------|:---------------|
| **Code Generation** | | | | |
| | HumanEval | `pass@1` | 0.5920 | ±0.0389 |
| **Common Sense Reasoning** | | | | |
| | HellaSwag | `acc` | 0.5074 | ±0.0050 |
| | | `acc_norm` | 0.6782 | ±0.0047 |
| | Winogrande | `acc` | 0.6748 | ±0.0132 |
| **Language Modeling** | | | | |
| | LAMBADA OpenAI (English) | `acc` | 0.6218 | ±0.0068 |
| | | `perplexity` | 5.8048 | ±0.1720 |
| **Knowledge & Reasoning** | | | | |
| | MMLU (English) - General | `acc` | 0.6920 | ±0.0453 |
| | MMLU (English) - STEM | `acc` | 0.5870 | ±0.0734 |
| | MMLU (Spanish) - General | `acc` | 0.6050 | ±0.0246 |
| | MMLU (Spanish) - STEM | `acc` | 0.6304 | ±0.0720 |
| **Specialized Knowledge** | | | | |
| | CEVAL - Advanced Mathematics | `acc` | 0.5863 | ±0.1177 |
### Performance Analysis
**Code Generation Excellence:** The 59.2% pass@1 score on HumanEval demonstrates superior code synthesis capabilities, positioning the model among the top performers in its parameter class for software engineering applications.
**Knowledge Integration:** MMLU performance of 69.2% (English) indicates strong knowledge retention and application across diverse domains, with particularly notable STEM performance in Spanish (63.04%) suggesting effective cross-linguistic knowledge transfer.
**Reasoning Capabilities:** Winogrande accuracy of 67.48% and HellaSwag normalized accuracy of 67.82% demonstrate robust common-sense reasoning and contextual understanding.
**Mathematical Proficiency:** CEVAL mathematics performance of 58.63% showcases specialized mathematical reasoning capabilities, particularly valuable for technical and scientific applications.
## Model Limitations & Risk Assessment
### Technical Constraints
- **Knowledge Temporal Boundary:** Training data cutoff limits real-time information access and contemporary knowledge integration
- **Computational Resource Requirements:** 4B parameter architecture demands significant computational resources for optimal performance
- **Context Window Limitations:** 32,768 token limit may constrain processing of extremely large documents or extended conversations
- **Quantization Trade-offs:** GGUF variants exhibit quality degradation proportional to compression level
### Performance Limitations
- **Hallucination Potential:** Like all large language models, may generate factually incorrect or logically inconsistent outputs
- **Domain-Specific Accuracy:** Performance varies across specialized domains; validation recommended for critical applications
- **Language Proficiency Variance:** Optimal performance in English with graduated capabilities in Spanish and Chinese
- **Reasoning Depth Constraints:** Complex multi-step reasoning may occasionally exhibit logical gaps or incomplete analysis
### Bias & Fairness Considerations
- **Training Data Bias Inheritance:** May reflect societal biases present in training corpora despite mitigation efforts
- **Cultural Context Limitations:** Responses may exhibit Western-centric perspectives due to training data composition
- **Demographic Representation:** Potential underrepresentation of certain demographic groups in training examples
- **Professional Domain Bias:** May exhibit preferences toward certain professional or academic perspectives
## Ethical Framework & Responsible AI Implementation
### Safety Mechanisms
- **Content Safety Filters:** Integrated mechanisms to identify and refuse harmful content generation
- **Bias Detection & Mitigation:** Ongoing monitoring for discriminatory outputs with corrective measures
- **Harmful Use Prevention:** Design features to discourage malicious applications and misuse
- **Privacy Protection:** No retention of user inputs or personal data during inference
### Deployment Guidelines
- **Human Oversight Requirement:** Critical decisions should maintain human validation and review
- **Domain-Specific Validation:** Professional applications require subject matter expert verification
- **Continuous Monitoring:** Regular assessment of outputs for quality and ethical compliance
- **User Education:** Clear communication of model capabilities and limitations to end users
### Research Ethics Compliance
Development adheres to established AI research ethics principles:
- **Beneficence:** Designed to augment human capabilities and provide positive societal impact
- **Non-maleficence:** Active measures to prevent harmful applications and negative consequences
- **Autonomy:** Respects user agency while providing transparent information about model behavior
- **Justice:** Efforts to ensure equitable access and fair treatment across user populations
## Technical Support & Model Citation
### Model Attribution
When utilizing Midnight Mini High Thinking in research or production environments, please cite:
```bibtex
@software{midnight_mini_high_thinking_2025,
author = {Enosis Labs AI Research Division},
title = {Midnight Mini High Thinking: Efficient Reasoning Architecture},
version = {05-25},
year = {2025},
publisher = {Enosis Labs},
url = {https://huggingface.co/enosislabs/midnight-mini-high-thinking-exp}
}
```
### Technical Support Channels
For technical inquiries, deployment assistance, or research collaboration:
- **Primary Contact:** <[email protected]>
- **Model Repository:** [Hugging Face Model Hub](https://huggingface.co/enosislabs/midnight-mini-high-thinking-exp)
### License & Distribution
Licensed under Apache 2.0, permitting commercial use, modification, and distribution with appropriate attribution.
---
**Enosis Labs AI Research Division**
*Advancing the frontiers of artificial intelligence through responsible innovation* |
dhruvsangani/Sentiment-Analysis-of-Banking-Dataset-GGUF | dhruvsangani | 2025-05-26T05:37:59Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-24T15:13:45Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** dhruvsangani
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dfafdsaf/deberta_sentiment_5000 | dfafdsaf | 2025-05-26T05:32:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-25T17:58:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
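In the absence of documented usage, a generic sequence-classification sketch is given below; the pipeline task and label semantics are assumptions based on the model name:
```python
from transformers import pipeline

# Assumption: this checkpoint carries a sentiment-classification head.
classifier = pipeline("text-classification", model="dfafdsaf/deberta_sentiment_5000")
print(classifier("The service was quick and the staff were friendly."))
```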
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dfafdsaf/roberta_sentiment_10000 | dfafdsaf | 2025-05-26T05:30:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-26T05:28:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
IoanaLivia/real-voices-youtube-horoscope-whisper-small | IoanaLivia | 2025-05-26T05:30:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ro",
"dataset:IoanaLivia/real-voices-youtube-horoscope",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-26T02:31:28Z | ---
library_name: transformers
language:
- ro
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- IoanaLivia/real-voices-youtube-horoscope
metrics:
- wer
model-index:
- name: IoanaLivia/real-voices-youtube-whisper-small
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: IoanaLivia/real-voices-youtube-horoscope
type: IoanaLivia/real-voices-youtube-horoscope
config: default
split: validation
args: 'config: hi, split: validation'
metrics:
- name: Wer
type: wer
value: 19.42582086986419
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IoanaLivia/real-voices-youtube-whisper-small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the IoanaLivia/real-voices-youtube-horoscope dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3917
- Wer: 19.4258
## Model description
More information needed
## Intended uses & limitations
More information needed
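A minimal transcription sketch with the `transformers` pipeline; the Romanian language hint is an assumption based on the training data:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="IoanaLivia/real-voices-youtube-horoscope-whisper-small",
)
# "sample.wav" is a placeholder path to a 16 kHz mono recording.
result = asr("sample.wav", generate_kwargs={"language": "romanian", "task": "transcribe"})
print(result["text"])
```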
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 0 | 0 | 0.4170 | 30.1014 |
| 0.4264 | 1.0 | 24 | 0.3082 | 21.4372 |
| 0.1957 | 2.0 | 48 | 0.2799 | 20.4917 |
| 0.1032 | 3.0 | 72 | 0.2904 | 20.3369 |
| 0.0535 | 4.0 | 96 | 0.3016 | 20.2338 |
| 0.0261 | 5.0 | 120 | 0.3226 | 20.5604 |
| 0.0148 | 6.0 | 144 | 0.3466 | 20.7495 |
| 0.0091 | 7.0 | 168 | 0.3596 | 21.1793 |
| 0.0063 | 8.0 | 192 | 0.3696 | 20.9386 |
| 0.0052 | 9.0 | 216 | 0.3703 | 19.7009 |
| 0.004 | 10.0 | 240 | 0.3749 | 19.8212 |
| 0.0032 | 11.0 | 264 | 0.3846 | 20.1822 |
| 0.0027 | 12.0 | 288 | 0.3867 | 19.5462 |
| 0.0023 | 13.0 | 312 | 0.3917 | 19.4258 |
| 0.0021 | 14.0 | 336 | 0.3948 | 20.6120 |
| 0.0019 | 15.0 | 360 | 0.3980 | 19.5633 |
| 0.0017 | 16.0 | 384 | 0.4014 | 19.6321 |
| 0.0016 | 17.0 | 408 | 0.4038 | 19.4946 |
| 0.0015 | 18.0 | 432 | 0.4067 | 19.4946 |
| 0.0014 | 19.0 | 456 | 0.4088 | 19.4774 |
| 0.0013 | 20.0 | 480 | 0.4113 | 19.6321 |
| 0.0013 | 21.0 | 504 | 0.4133 | 19.5805 |
| 0.0012 | 22.0 | 528 | 0.4153 | 19.6493 |
| 0.0011 | 23.0 | 552 | 0.4173 | 19.6493 |
| 0.0011 | 24.0 | 576 | 0.4189 | 19.6493 |
| 0.001 | 25.0 | 600 | 0.4203 | 19.7009 |
| 0.001 | 26.0 | 624 | 0.4219 | 19.7181 |
| 0.001 | 27.0 | 648 | 0.4232 | 19.7353 |
| 0.0009 | 28.0 | 672 | 0.4245 | 19.8040 |
| 0.0009 | 29.0 | 696 | 0.4257 | 19.8384 |
| 0.0009 | 30.0 | 720 | 0.4268 | 19.8900 |
| 0.0009 | 31.0 | 744 | 0.4276 | 19.8556 |
| 0.0009 | 32.0 | 768 | 0.4286 | 19.8556 |
| 0.0008 | 33.0 | 792 | 0.4293 | 19.7868 |
| 0.0008 | 34.0 | 816 | 0.4300 | 19.8728 |
| 0.0008 | 35.0 | 840 | 0.4307 | 19.7696 |
| 0.0008 | 36.0 | 864 | 0.4311 | 19.8728 |
| 0.0008 | 37.0 | 888 | 0.4315 | 19.8212 |
| 0.0008 | 38.0 | 912 | 0.4317 | 19.8556 |
| 0.0008 | 39.0 | 936 | 0.4319 | 19.9244 |
| 0.0008 | 40.0 | 960 | 0.4319 | 19.8212 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
nabaram/resnet-18 | nabaram | 2025-05-26T05:28:46Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-26T05:28:46Z | ---
license: apache-2.0
---
|
DrViJ/ppo-Huggy | DrViJ | 2025-05-26T05:28:26Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2025-05-25T21:05:32Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: DrViJ/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
VIDEO-18-Zarnab-Shastri-Viral-Video/Original.Full.Clip.Zarnab.Shastri.Viral.Video.Leaks.Official | VIDEO-18-Zarnab-Shastri-Viral-Video | 2025-05-26T05:26:36Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-26T05:26:21Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
MAAT-EL-DUAT/JENNA-CHATML-9000 | MAAT-EL-DUAT | 2025-05-26T05:26:32Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-25T22:06:35Z | ### EXPERIMENTS IN EXTREME REVERSE POLICY ACTION
THIS IS STILL THEORY
HAS NOT BEEN DONE YET |
jyoung105/ent2_t13 | jyoung105 | 2025-05-26T05:26:18Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-26T05:26:16Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Ent2_T13
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/jyoung105/ent2_t13/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jyoung105/ent2_t13', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 500
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/jyoung105/ent2_t13/discussions) to add images that show off what you’ve made with this LoRA.
|
jeongseokoh/llama3-8b-with-conclusion-Alphabet_False_Multiple3_aggr_last_starting_with_inst_analyzer | jeongseokoh | 2025-05-26T05:25:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-26T05:18:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
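In the absence of documented usage, a generic causal-LM loading sketch (precision and generation settings are assumptions):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jeongseokoh/llama3-8b-with-conclusion-Alphabet_False_Multiple3_aggr_last_starting_with_inst_analyzer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Summarize the main argument:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```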
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Fynd/cloth-vton | Fynd | 2025-05-26T05:25:08Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-26T05:24:32Z | ---
title: Cloth Vton
emoji: 📉
colorFrom: gray
colorTo: green
sdk: gradio
sdk_version: 5.31.0
app_file: app.py
pinned: false
short_description: Cloth VTON
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
abrarlohia/cloth-vton | abrarlohia | 2025-05-26T05:23:53Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-26T05:21:36Z | ---
title: Cloth Vton
emoji: 📉
colorFrom: gray
colorTo: green
sdk: gradio
sdk_version: 5.31.0
app_file: app.py
pinned: false
short_description: Cloth VTON
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
MAAT-EL-DUAT/JENNA-9001 | MAAT-EL-DUAT | 2025-05-26T05:23:07Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-26T05:22:21Z | ### DARKAI ASSASSIN
IN THIS EXPERIMENT ONLY THE OUTPUT LOSS IS UPDATED
TRY TO MASK OUT THE INSTRUCTION + INPUT
|
Ash2749/trial3.1_8b | Ash2749 | 2025-05-26T05:21:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-26T05:19:00Z | ---
base_model: unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Ash2749
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
TheGardener/Qwen-0.4B-shortened-llama | TheGardener | 2025-05-26T05:20:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-26T05:19:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Lightricks/ltxv-spatial-upscaler-0.9.7 | Lightricks | 2025-05-26T05:18:52Z | 1,445 | 1 | diffusers | [
"diffusers",
"safetensors",
"ltx-video",
"video-upscaling",
"video-to-video",
"en",
"license:other",
"diffusers:LTXLatentUpsamplePipeline",
"region:us"
] | null | 2025-05-14T18:09:52Z | ---
tags:
- ltx-video
- video-upscaling
- diffusers
- video-to-video
pinned: false
language:
- en
license: other
pipeline_tag: video-to-video
library_name: diffusers
---
# LTX Video Spatial Upscaler 0.9.7 Model Card
This model card focuses on the LTX Video Spatial Upscaler 0.9.7, a component model designed to work in conjunction with the LTX-Video generation models.
The main LTX-Video codebase is available [here](https://github.com/Lightricks/LTX-Video).
LTX-Video is the first DiT-based video generation model capable of generating high-quality videos in real-time. It produces 30 FPS videos at a 1216×704 resolution faster than they can be watched. Trained on a large-scale dataset of diverse videos, the model generates high-resolution videos with realistic and varied content.
We provide models for both text-to-video and image+text-to-video use cases.
**The LTX Video Spatial Upscaler** is a diffusion-based model that enhances the spatial resolution of videos. It is specifically trained to upscale the latent representations of videos generated by LTX Video models.
<img src="./media/trailer.gif" alt="trailer" width="512">
| | | | |
|:---:|:---:|:---:|:---:|
| <br><details style="max-width: 300px; margin: auto;"><summary>A woman with long brown hair and light skin smiles at another woman...</summary>A woman with long brown hair and light skin smiles at another woman with long blonde hair. The woman with brown hair wears a black jacket and has a small, barely noticeable mole on her right cheek. The camera angle is a close-up, focused on the woman with brown hair's face. The lighting is warm and natural, likely from the setting sun, casting a soft glow on the scene. The scene appears to be real-life footage.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A woman walks away from a white Jeep parked on a city street at night...</summary>A woman walks away from a white Jeep parked on a city street at night, then ascends a staircase and knocks on a door. The woman, wearing a dark jacket and jeans, walks away from the Jeep parked on the left side of the street, her back to the camera; she walks at a steady pace, her arms swinging slightly by her sides; the street is dimly lit, with streetlights casting pools of light on the wet pavement; a man in a dark jacket and jeans walks past the Jeep in the opposite direction; the camera follows the woman from behind as she walks up a set of stairs towards a building with a green door; she reaches the top of the stairs and turns left, continuing to walk towards the building; she reaches the door and knocks on it with her right hand; the camera remains stationary, focused on the doorway; the scene is captured in real-life footage.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A woman with blonde hair styled up, wearing a black dress...</summary>A woman with blonde hair styled up, wearing a black dress with sequins and pearl earrings, looks down with a sad expression on her face. The camera remains stationary, focused on the woman's face. The lighting is dim, casting soft shadows on her face. The scene appears to be from a movie or TV show.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>The camera pans over a snow-covered mountain range...</summary>The camera pans over a snow-covered mountain range, revealing a vast expanse of snow-capped peaks and valleys.The mountains are covered in a thick layer of snow, with some areas appearing almost white while others have a slightly darker, almost grayish hue. The peaks are jagged and irregular, with some rising sharply into the sky while others are more rounded. The valleys are deep and narrow, with steep slopes that are also covered in snow. The trees in the foreground are mostly bare, with only a few leaves remaining on their branches. The sky is overcast, with thick clouds obscuring the sun. The overall impression is one of peace and tranquility, with the snow-covered mountains standing as a testament to the power and beauty of nature.</details> |
| <br><details style="max-width: 300px; margin: auto;"><summary>A woman with light skin, wearing a blue jacket and a black hat...</summary>A woman with light skin, wearing a blue jacket and a black hat with a veil, looks down and to her right, then back up as she speaks; she has brown hair styled in an updo, light brown eyebrows, and is wearing a white collared shirt under her jacket; the camera remains stationary on her face as she speaks; the background is out of focus, but shows trees and people in period clothing; the scene is captured in real-life footage.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A man in a dimly lit room talks on a vintage telephone...</summary>A man in a dimly lit room talks on a vintage telephone, hangs up, and looks down with a sad expression. He holds the black rotary phone to his right ear with his right hand, his left hand holding a rocks glass with amber liquid. He wears a brown suit jacket over a white shirt, and a gold ring on his left ring finger. His short hair is neatly combed, and he has light skin with visible wrinkles around his eyes. The camera remains stationary, focused on his face and upper body. The room is dark, lit only by a warm light source off-screen to the left, casting shadows on the wall behind him. The scene appears to be from a movie.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A prison guard unlocks and opens a cell door...</summary>A prison guard unlocks and opens a cell door to reveal a young man sitting at a table with a woman. The guard, wearing a dark blue uniform with a badge on his left chest, unlocks the cell door with a key held in his right hand and pulls it open; he has short brown hair, light skin, and a neutral expression. The young man, wearing a black and white striped shirt, sits at a table covered with a white tablecloth, facing the woman; he has short brown hair, light skin, and a neutral expression. The woman, wearing a dark blue shirt, sits opposite the young man, her face turned towards him; she has short blonde hair and light skin. The camera remains stationary, capturing the scene from a medium distance, positioned slightly to the right of the guard. The room is dimly lit, with a single light fixture illuminating the table and the two figures. The walls are made of large, grey concrete blocks, and a metal door is visible in the background. The scene is captured in real-life footage.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A woman with blood on her face and a white tank top...</summary>A woman with blood on her face and a white tank top looks down and to her right, then back up as she speaks. She has dark hair pulled back, light skin, and her face and chest are covered in blood. The camera angle is a close-up, focused on the woman's face and upper torso. The lighting is dim and blue-toned, creating a somber and intense atmosphere. The scene appears to be from a movie or TV show.</details> |
| <br><details style="max-width: 300px; margin: auto;"><summary>A man with graying hair, a beard, and a gray shirt...</summary>A man with graying hair, a beard, and a gray shirt looks down and to his right, then turns his head to the left. The camera angle is a close-up, focused on the man's face. The lighting is dim, with a greenish tint. The scene appears to be real-life footage. Step</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A clear, turquoise river flows through a rocky canyon...</summary>A clear, turquoise river flows through a rocky canyon, cascading over a small waterfall and forming a pool of water at the bottom.The river is the main focus of the scene, with its clear water reflecting the surrounding trees and rocks. The canyon walls are steep and rocky, with some vegetation growing on them. The trees are mostly pine trees, with their green needles contrasting with the brown and gray rocks. The overall tone of the scene is one of peace and tranquility.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A man in a suit enters a room and speaks to two women...</summary>A man in a suit enters a room and speaks to two women sitting on a couch. The man, wearing a dark suit with a gold tie, enters the room from the left and walks towards the center of the frame. He has short gray hair, light skin, and a serious expression. He places his right hand on the back of a chair as he approaches the couch. Two women are seated on a light-colored couch in the background. The woman on the left wears a light blue sweater and has short blonde hair. The woman on the right wears a white sweater and has short blonde hair. The camera remains stationary, focusing on the man as he enters the room. The room is brightly lit, with warm tones reflecting off the walls and furniture. The scene appears to be from a film or television show.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>The waves crash against the jagged rocks of the shoreline...</summary>The waves crash against the jagged rocks of the shoreline, sending spray high into the air.The rocks are a dark gray color, with sharp edges and deep crevices. The water is a clear blue-green, with white foam where the waves break against the rocks. The sky is a light gray, with a few white clouds dotting the horizon.</details> |
| <br><details style="max-width: 300px; margin: auto;"><summary>The camera pans across a cityscape of tall buildings...</summary>The camera pans across a cityscape of tall buildings with a circular building in the center. The camera moves from left to right, showing the tops of the buildings and the circular building in the center. The buildings are various shades of gray and white, and the circular building has a green roof. The camera angle is high, looking down at the city. The lighting is bright, with the sun shining from the upper left, casting shadows from the buildings. The scene is computer-generated imagery.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A man walks towards a window, looks out, and then turns around...</summary>A man walks towards a window, looks out, and then turns around. He has short, dark hair, dark skin, and is wearing a brown coat over a red and gray scarf. He walks from left to right towards a window, his gaze fixed on something outside. The camera follows him from behind at a medium distance. The room is brightly lit, with white walls and a large window covered by a white curtain. As he approaches the window, he turns his head slightly to the left, then back to the right. He then turns his entire body to the right, facing the window. The camera remains stationary as he stands in front of the window. The scene is captured in real-life footage.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>Two police officers in dark blue uniforms and matching hats...</summary>Two police officers in dark blue uniforms and matching hats enter a dimly lit room through a doorway on the left side of the frame. The first officer, with short brown hair and a mustache, steps inside first, followed by his partner, who has a shaved head and a goatee. Both officers have serious expressions and maintain a steady pace as they move deeper into the room. The camera remains stationary, capturing them from a slightly low angle as they enter. The room has exposed brick walls and a corrugated metal ceiling, with a barred window visible in the background. The lighting is low-key, casting shadows on the officers' faces and emphasizing the grim atmosphere. The scene appears to be from a film or television show.</details> | <br><details style="max-width: 300px; margin: auto;"><summary>A woman with short brown hair, wearing a maroon sleeveless top...</summary>A woman with short brown hair, wearing a maroon sleeveless top and a silver necklace, walks through a room while talking, then a woman with pink hair and a white shirt appears in the doorway and yells. The first woman walks from left to right, her expression serious; she has light skin and her eyebrows are slightly furrowed. The second woman stands in the doorway, her mouth open in a yell; she has light skin and her eyes are wide. The room is dimly lit, with a bookshelf visible in the background. The camera follows the first woman as she walks, then cuts to a close-up of the second woman's face. The scene is captured in real-life footage.</details> |
**This upscaler model is compatible with and can be used to improve the output quality of videos generated by both:**
* `Lightricks/LTX-Video-0.9.7-dev`
* `Lightricks/LTX-Video-0.9.7-distilled`
## Model Details
- **Developed by:** Lightricks
- **Model type:** Latent Diffusion Video Spatial Upscaler
- **Input:** Latent video frames from an LTX Video model.
- **Output:** Higher-resolution latent video frames.
- **Compatibility:** Can be used with `Lightricks/LTX-Video-0.9.7-dev` and `Lightricks/LTX-Video-0.9.7-distilled`.
## Usage
### Direct use
You can use the model for purposes permitted under the applicable license:
- 2B version 0.9: [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.license.txt)
- 2B version 0.9.1 [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.1.license.txt)
- 2B version 0.9.5 [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.5.license.txt)
- 2B version 0.9.6-dev [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-2b-0.9.6-dev-04-25.license.txt)
- 2B version 0.9.6-distilled [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-2b-0.9.6-distilled-04-25.license.txt)
- 13B version 0.9.7-dev [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-dev.license.txt)
- 13B version 0.9.7-dev-fp8 [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-dev-fp8.license.txt)
- 13B version 0.9.7-distilled [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-distilled.license.txt)
- 13B version 0.9.7-distilled-fp8 [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-distilled-fp8.license.txt)
- 13B version 0.9.7-distilled-lora128 [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-13b-0.9.7-distilled-lora128.license.txt)
- Temporal upscaler version 0.9.7 [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-temporal-upscaler-0.9.7.license.txt)
- Spatial upscaler version 0.9.7 [license](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-spatial-upscaler-0.9.7.license.txt)
### General tips:
* The model works on resolutions that are divisible by 32 and on frame counts of the form 8k + 1 (e.g., 257). If the resolution or frame count does not satisfy these constraints, the input is padded with -1 and then cropped to the desired resolution and number of frames; a hypothetical rounding helper is sketched after this list.
* The model works best at resolutions under 720 x 1280 and with fewer than 257 frames.
* Prompts should be in English. The more elaborate the prompt, the better. A good prompt looks like `The turquoise waves crash against the dark, jagged rocks of the shore, sending white foam spraying into the air. The scene is dominated by the stark contrast between the bright blue water and the dark, almost black rocks. The water is a clear, turquoise color, and the waves are capped with white foam. The rocks are dark and jagged, and they are covered in patches of green moss. The shore is lined with lush green vegetation, including trees and bushes. In the background, there are rolling hills covered in dense forest. The sky is cloudy, and the light is dim.`
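As a rough illustration of the sizing rule above, here is a hypothetical helper (not part of the LTX-Video codebase) that rounds arbitrary dimensions up to the nearest valid values:

```py
# Hypothetical helper, not part of the LTX-Video codebase: rounds spatial
# dims up to multiples of 32 and the frame count up to the form 8k + 1.
def round_up_to_valid_ltx_dims(height: int, width: int, num_frames: int):
    height = ((height + 31) // 32) * 32
    width = ((width + 31) // 32) * 32
    num_frames = ((num_frames + 6) // 8) * 8 + 1
    return height, width, num_frames

print(round_up_to_valid_ltx_dims(700, 1200, 250))  # -> (704, 1216, 257)
```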
### Online demo
The model is accessible right away via the following links:
- [LTX-Studio image-to-video](https://app.ltx.studio/ltx-video)
- [Fal.ai text-to-video](https://fal.ai/models/fal-ai/ltx-video)
- [Fal.ai image-to-video](https://fal.ai/models/fal-ai/ltx-video/image-to-video)
- [Replicate text-to-video and image-to-video](https://replicate.com/lightricks/ltx-video)
### ComfyUI
To use our model with ComfyUI, please follow the instructions at a dedicated [ComfyUI repo](https://github.com/Lightricks/ComfyUI-LTXVideo/).
### Run locally
#### Installation
The codebase was tested with Python 3.10.5, CUDA version 12.2, and supports PyTorch >= 2.1.2.
```bash
git clone https://github.com/Lightricks/LTX-Video.git
cd LTX-Video
# create env
python -m venv env
source env/bin/activate
python -m pip install -e .\[inference-script\]
```
#### Inference
To use our model, please follow the inference code in [inference.py](https://github.com/Lightricks/LTX-Video/blob/main/inference.py).
### Diffusers 🧨
LTX Video is compatible with the [Diffusers Python library](https://huggingface.co/docs/diffusers/main/en/index). It supports both text-to-video and image-to-video generation.
Make sure you install `diffusers` before trying out the examples below.
```bash
pip install -U git+https://github.com/huggingface/diffusers
```
The LTX Video Spatial Upscaler is used via the `LTXLatentUpsamplePipeline` in the `diffusers` library. It is intended to be part of a multi-stage generation process.
Below is an example demonstrating how to use the spatial upsampler with a base LTX Video model (either the 'dev' or 'distilled' version).
```py
import torch
from diffusers import LTXConditionPipeline, LTXLatentUpsamplePipeline
from diffusers.pipelines.ltx.pipeline_ltx_condition import LTXVideoCondition
from diffusers.utils import export_to_video, load_video
# Choose your base LTX Video model:
# base_model_id = "Lightricks/LTX-Video-0.9.7-dev"
base_model_id = "Lightricks/LTX-Video-0.9.7-distilled" # Using distilled for this example
# 0. Load base model and upsampler
pipe = LTXConditionPipeline.from_pretrained(base_model_id, torch_dtype=torch.bfloat16)
pipe_upsample = LTXLatentUpsamplePipeline.from_pretrained(
"Lightricks/ltxv-spatial-upscaler-0.9.7",
vae=pipe.vae,
torch_dtype=torch.bfloat16
)
pipe.to("cuda")
pipe_upsample.to("cuda")
def round_to_nearest_resolution_acceptable_by_vae(height, width):
    # Round down to multiples of the VAE's *spatial* compression ratio,
    # so the dimensions stay divisible by 32 as required by the model.
    height = height - (height % pipe.vae_spatial_compression_ratio)
    width = width - (width % pipe.vae_spatial_compression_ratio)
    return height, width
video = load_video(
"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cosmos/cosmos-video2world-input-vid.mp4"
)[:21] # Use only the first 21 frames as conditioning
condition1 = LTXVideoCondition(video=video, frame_index=0)
prompt = "The video depicts a winding mountain road covered in snow, with a single vehicle traveling along it. The road is flanked by steep, rocky cliffs and sparse vegetation. The landscape is characterized by rugged terrain and a river visible in the distance. The scene captures the solitude and beauty of a winter drive through a mountainous region."
negative_prompt = "worst quality, inconsistent motion, blurry, jittery, distorted"
expected_height, expected_width = 768, 1152
downscale_factor = 2 / 3
num_frames = 161
# Part 1. Generate video at smaller resolution
downscaled_height, downscaled_width = int(expected_height * downscale_factor), int(expected_width * downscale_factor)
downscaled_height, downscaled_width = round_to_nearest_resolution_acceptable_by_vae(downscaled_height, downscaled_width)
latents = pipe(
conditions=[condition1],
prompt=prompt,
negative_prompt=negative_prompt,
width=downscaled_width,
height=downscaled_height,
num_frames=num_frames,
num_inference_steps=30,
generator=torch.Generator().manual_seed(0),
output_type="latent",
).frames
# Part 2. Upscale generated video using latent upsampler with fewer inference steps
# The available latent upsampler upscales the height/width by 2x
upscaled_height, upscaled_width = downscaled_height * 2, downscaled_width * 2
upscaled_latents = pipe_upsample(
latents=latents,
output_type="latent"
).frames
# Part 3. Denoise the upscaled video with few steps to improve texture (optional, but recommended)
video = pipe(
conditions=[condition1],
prompt=prompt,
negative_prompt=negative_prompt,
width=upscaled_width,
height=upscaled_height,
num_frames=num_frames,
denoise_strength=0.4, # Effectively, 4 inference steps out of 10
num_inference_steps=10,
latents=upscaled_latents,
decode_timestep=0.05,
image_cond_noise_scale=0.025,
generator=torch.Generator().manual_seed(0),
output_type="pil",
).frames[0]
# Part 4. Downscale the video to the expected resolution
video = [frame.resize((expected_width, expected_height)) for frame in video]
export_to_video(video, "output.mp4", fps=24)
```
For more details and inference examples using 🧨 Diffusers, check out the [diffusers documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/ltx_video).
Diffusers also supports directly loading from the original LTX checkpoints using the `from_single_file()` method. Check out [this section](https://huggingface.co/docs/diffusers/main/en/api/pipelines/ltx_video#loading-single-files) to learn more.
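As a rough sketch of single-file loading — the checkpoint URL below is an assumption for illustration; the linked docs section has the canonical example:

```py
import torch
from diffusers import LTXVideoTransformer3DModel

# Hypothetical checkpoint URL; substitute a real LTX .safetensors file.
single_file_url = "https://huggingface.co/Lightricks/LTX-Video/ltx-video-2b-v0.9.safetensors"
transformer = LTXVideoTransformer3DModel.from_single_file(
    single_file_url, torch_dtype=torch.bfloat16
)
```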
## Limitations
- This model is not intended or able to provide factual information.
- As a statistical model, this checkpoint might amplify existing societal biases.
- The model may fail to generate videos that match the prompts perfectly.
- Prompt following is heavily influenced by the prompting style. |
New-tutorial-Zarnab-Shastri-Viral-Video/FULL.VIDEO.LINK.Bella.Zarnab.Shastri.Viral.Video.Leaks.Official | New-tutorial-Zarnab-Shastri-Viral-Video | 2025-05-26T05:18:40Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-26T05:18:26Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
g-assismoraes/gemma-3-1b-it-agnews | g-assismoraes | 2025-05-26T05:16:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-26T02:33:58Z | ---
library_name: transformers
license: gemma
base_model: google/gemma-3-1b-it
tags:
- generated_from_trainer
model-index:
- name: gemma-3-1b-it-agnews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-3-1b-it-agnews
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
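A minimal `TrainingArguments` sketch matching the list above (only the listed values come from this card; `output_dir` and anything unlisted are assumptions):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gemma-3-1b-it-agnews",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",  # betas=(0.9, 0.999) and eps=1e-08 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```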
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1073 | 1.0 | 27000 | 1.1091 |
| 1.0571 | 2.0 | 54000 | 1.1085 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
HajimeOgawa/gemma3-7b-mbti-chat-energy | HajimeOgawa | 2025-05-26T05:14:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-26T05:09:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
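Since this card is otherwise unfilled, here is a hedged sketch based only on the repository's tags (a conversational `gemma` text-generation checkpoint); treat it as an assumption, not the author's documented usage:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the checkpoint ships a chat template, per its conversational tag.
model_id = "HajimeOgawa/gemma3-7b-mbti-chat-energy"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "How do you recharge after a long day?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```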
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Linslab/VLA-OS | Linslab | 2025-05-26T05:09:32Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-20T05:12:01Z | |
SaoSamarth/openai-whisper-large-v2-Khmer-update-1 | SaoSamarth | 2025-05-26T05:08:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-26T05:08:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
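Since this card is otherwise unfilled, here is a hedged sketch based only on the repository name (a Whisper large-v2 checkpoint fine-tuned for Khmer); treat it as an assumption, not the author's documented usage:

```python
from transformers import pipeline

# Assumption: a Whisper ASR checkpoint, inferred from the repository name.
asr = pipeline(
    "automatic-speech-recognition",
    model="SaoSamarth/openai-whisper-large-v2-Khmer-update-1",
)
print(asr("khmer_sample.wav")["text"])  # replace with your own audio file
```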
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
niklasm222/qwen2.5-3b-1.75k-prolog-sp-struct-rwd1-new-absurd-sweep-4 | niklasm222 | 2025-05-26T05:05:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-26T05:04:49Z | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** niklasm222
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
papaymaguire/qandasdg-experiment-lora | papaymaguire | 2025-05-26T05:01:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-26T05:00:36Z | ---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: transformers
model_name: qandasdg-experiment-lora
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qandasdg-experiment-lora
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="papaymaguire/qandasdg-experiment-lora", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
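A minimal sketch of such an SFT run with TRL (the dataset and config values are assumptions; only the base model and trainer type come from this card):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Illustrative dataset; the actual training data is not documented here.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-3B-Instruct",
    args=SFTConfig(output_dir="qandasdg-experiment-lora"),
    train_dataset=dataset,
)
trainer.train()
```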
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.1
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
srosalesr/HF_practical_distilbert-base-uncased | srosalesr | 2025-05-26T05:01:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-25T22:18:34Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: HF_practical_distilbert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HF_practical_distilbert-base-uncased
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0005
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3241 | 1.0 | 7 | 0.0377 | 1.0 |
| 0.0174 | 2.0 | 14 | 0.0050 | 1.0 |
| 0.0035 | 3.0 | 21 | 0.0018 | 1.0 |
| 0.0015 | 4.0 | 28 | 0.0011 | 1.0 |
| 0.001 | 5.0 | 35 | 0.0008 | 1.0 |
| 0.0008 | 6.0 | 42 | 0.0006 | 1.0 |
| 0.0007 | 7.0 | 49 | 0.0006 | 1.0 |
| 0.0006 | 8.0 | 56 | 0.0005 | 1.0 |
| 0.0007 | 9.0 | 63 | 0.0005 | 1.0 |
| 0.0005 | 10.0 | 70 | 0.0005 | 1.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
kyu5787/exaone-2.4b-mlx | kyu5787 | 2025-05-26T04:58:48Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"exaone",
"lg-ai",
"exaone-3.5",
"text-generation",
"conversational",
"custom_code",
"en",
"ko",
"base_model:LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct",
"base_model:finetune:LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct",
"license:other",
"region:us"
] | text-generation | 2025-05-26T04:55:44Z | ---
license: other
license_name: exaone
license_link: LICENSE
language:
- en
- ko
tags:
- lg-ai
- exaone
- exaone-3.5
- mlx
pipeline_tag: text-generation
library_name: mlx
base_model: LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct
---
# kyu5787/exaone-2.4b-mlx
This model [kyu5787/exaone-2.4b-mlx](https://huggingface.co/kyu5787/exaone-2.4b-mlx) was
converted to MLX format from [LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct](https://huggingface.co/LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct)
using mlx-lm version **0.24.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("kyu5787/exaone-2.4b-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
BishakhaBiswas/custom-generate-demo | BishakhaBiswas | 2025-05-26T04:57:24Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-26T04:57:24Z | ---
license: apache-2.0
---
|