| Column | Type | Range / Values |
|---|---|---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-15 00:43:56 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 521 classes |
| tags | list | length 1–4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-15 00:40:56 |
| card | string | length 11–1.01M |
mlabonne/EvolCodeLlama-7b | mlabonne | 2025-05-27T09:04:28Z | 56 | 6 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"dataset:mlabonne/Evol-Instruct-Python-1k",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-08-25T12:36:31Z | ---
license: apache-2.0
datasets:
- mlabonne/Evol-Instruct-Python-1k
pipeline_tag: text-generation
---
# 🦙💻 EvolCodeLlama-7b
📝 [Article](https://medium.com/@mlabonne/a-beginners-guide-to-llm-fine-tuning-4bae7d4da672)
<center><img src="https://i.imgur.com/5m7OJQU.png" width="300"></center>
This is a [`codellama/CodeLlama-7b-hf`](https://huggingface.co/codellama/CodeLlama-7b-hf) model fine-tuned using QLoRA (4-bit precision) on the [`mlabonne/Evol-Instruct-Python-1k`](https://huggingface.co/datasets/mlabonne/Evol-Instruct-Python-1k) dataset.
## 🔧 Training
It was trained on an RTX 3090 in 1h 11m 44s with the following configuration file:
```yaml
base_model: codellama/CodeLlama-7b-hf
base_model_config: codellama/CodeLlama-7b-hf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true
hub_model_id: EvolCodeLlama-7b
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: mlabonne/Evol-Instruct-Python-1k
type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.02
output_dir: ./qlora-out
adapter: qlora
lora_model_dir:
sequence_len: 2048
sample_packing: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: axolotl
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 10
num_epochs: 3
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 100
eval_steps: 0.01
save_strategy: epoch
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
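This file follows [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)'s configuration format, so the run can be reproduced with a command along these lines (a sketch assuming an Axolotl installation and the file saved locally as `config.yaml`):
```bash
# Hypothetical launch command; adjust to your Axolotl version and paths
accelerate launch -m axolotl.cli.train config.yaml
```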
Here are the loss curves:

It is mainly designed for educational purposes, not for inference.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## 💻 Usage
```python
# pip install transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/EvolCodeLlama-7b"
prompt = "Your prompt"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

sequences = pipeline(
    f'{prompt}',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
``` |
samcomber/ppo-pyramid-target | samcomber | 2025-05-27T09:03:37Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2025-05-27T09:03:34Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
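### Download the model from the Hub
To run or resume training locally, you first need the checkpoint; a sketch using the `mlagents-load-from-hf` helper from the Hub integration (assuming the course setup with ML-Agents installed):
```bash
mlagents-load-from-hf --repo-id="samcomber/ppo-pyramid-target" --local-dir="./downloads"
```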
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: samcomber/ppo-pyramid-target
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
NaykinYT/SFT-qwen-merged-data-mini-1 | NaykinYT | 2025-05-27T09:03:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T09:01:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
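Since this card is auto-generated, no official snippet is provided. A minimal sketch for a standard `transformers` causal LM checkpoint (assuming this repository loads with `AutoModelForCausalLM`; untested):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NaykinYT/SFT-qwen-merged-data-mini-1"  # repository from this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```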
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mesolitica/Malaysian-gemma-3-1b-it | mesolitica | 2025-05-27T09:01:37Z | 7 | 0 | null | [
"safetensors",
"gemma3_text",
"ms",
"en",
"zh",
"ta",
"region:us"
]
| null | 2025-05-03T12:25:50Z | ---
language:
- ms
- en
- zh
- ta
---
# Malaysian gemma-3-1b-it
Continued finetuning of https://huggingface.co/google/gemma-3-1b-it on a highly curated 1.5B-token Malaysian instruction dataset.
## Improvement
1. Supports responses in Mandarin, Tamil, Jawi, Manglish, Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu.
2. Able to code in Mandarin, Tamil, Jawi, Manglish, Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu.
3. Handles multi-turn Malaysian context, such as Malaysian legislation, politics, religions and languages.
## Training session
Finetuned on [mesolitica/Malaysian-SFT](https://huggingface.co/datasets/mesolitica/Malaysian-SFT) to make the model understand Malaysian context.
## How we train
1. LoRA on `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "embed_tokens", "lm_head"]` (see the sketch below).
2. Rank 128 with alpha 256, i.e. a scaling factor of 2.0.
3. Multipacking at 8192 context length with proper SDPA causal masking to prevent cross-document contamination, with correct position IDs.
4. Chunked CCE loss for LoRA.
5. WandB at https://wandb.ai/huseinzol05/lora-embedding-128-gemma3-1b-malaysian-8k?nw=nwuserhuseinzol05
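A rough `peft` equivalent of this adapter configuration (illustrative only; the actual training code is linked right below):
```python
from peft import LoraConfig

# Rank 128 with alpha 256 (scaling factor 2.0), with embeddings and lm_head
# included in the trainable modules, matching the setup described above.
lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
        "embed_tokens", "lm_head",
    ],
    task_type="CAUSAL_LM",
)
```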
Source code at https://github.com/mesolitica/malaya/tree/master/session/gemma3
## Benchmark
### MalayMMLU
Based on 0-shot first token accuracy,
```
Model Accuracy shot by_letter category
0 Malaysian-gemma-3-1b-it 48.096603 0shot True STEM
1 Malaysian-gemma-3-1b-it 47.423664 0shot True Language
2 Malaysian-gemma-3-1b-it 47.210176 0shot True Social science
3 Malaysian-gemma-3-1b-it 47.709283 0shot True Others
4 Malaysian-gemma-3-1b-it 51.786121 0shot True Humanities
{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : Malaysian-gemma-3-1b-it
Metric : first
Shot : 0shot
average accuracy 48.27158964192789
accuracy for STEM 48.09660253786328
accuracy for Language 47.4236641221374
accuracy for Social science 47.21017635154669
accuracy for Others 47.70928280163108
accuracy for Humanities 51.786120591581344
```
## Acknowledgement
Special thanks to https://www.sns.com.my and Nvidia for an 8x H100 node! |
mesolitica/Malaysian-gemma-3-27b-it | mesolitica | 2025-05-27T09:00:47Z | 11 | 0 | null | [
"safetensors",
"gemma3_text",
"ms",
"en",
"zh",
"ta",
"region:us"
]
| null | 2025-04-27T01:53:09Z | ---
language:
- ms
- en
- zh
- ta
---
# Malaysian gemma-3-27b-it
Continued finetuning of https://huggingface.co/google/gemma-3-27b-it on a highly curated 1.5B-token Malaysian instruction dataset.
## Improvement
1. Supports responses in Mandarin, Tamil, Jawi, Manglish, Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu.
2. Able to code in Mandarin, Tamil, Jawi, Manglish, Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu.
3. Handles multi-turn Malaysian context, such as Malaysian legislation, politics, religions and languages.
## Training session
Finetuned on [mesolitica/Malaysian-SFT](https://huggingface.co/datasets/mesolitica/Malaysian-SFT) to make the model understand Malaysian context.
## How we train
1. LoRA on `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "embed_tokens", "lm_head"]`.
2. Rank 128 with alpha 256, i.e. a scaling factor of 2.0.
3. Multipacking at 8192 context length with proper SDPA causal masking to prevent cross-document contamination, with correct position IDs (see the toy sketch below).
4. Chunked CCE loss for LoRA.
5. WandB at https://wandb.ai/huseinzol05/lora-embedding-128-gemma3-27b-malaysian-8k?nw=nwuserhuseinzol05
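To illustrate point 3, each packed document restarts its own position IDs inside the packed sequence (a toy sketch, not the actual training code):
```python
import torch

# Two documents of lengths 5 and 3 packed into one 8-token sequence:
# position IDs restart at 0 for each document.
doc_lengths = [5, 3]
position_ids = torch.cat([torch.arange(n) for n in doc_lengths])
print(position_ids)  # tensor([0, 1, 2, 3, 4, 0, 1, 2])
```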
Source code at https://github.com/mesolitica/malaya/tree/master/session/gemma3
## Benchmark
### MalayMMLU
Based on 0-shot exact first token match using vLLM,
```
Model Accuracy shot category
0 Malaysian-gemma-3-27b-it 72.697503 0 STEM
1 Malaysian-gemma-3-27b-it 76.781170 0 Language
2 Malaysian-gemma-3-27b-it 68.227812 0 Social science
3 Malaysian-gemma-3-27b-it 68.385704 0 Others
4 Malaysian-gemma-3-27b-it 71.535836 0 Humanities
Model : Malaysian-gemma-3-27b-it
Metric : full
Shot : 0
average accuracy 71.52769173584439
accuracy for STEM 72.6975030699959
accuracy for Language 76.78117048346056
accuracy for Social science 68.22781150621567
accuracy for Others 68.38570400575678
accuracy for Humanities 71.5358361774744
```
**Currently, the original model cannot use guided decoding in vLLM.**
## Acknowledgement
Special thanks to https://www.sns.com.my and Nvidia for an 8x H100 node! |
qxakshat/all-MiniLM-L6-v2-32dim | qxakshat | 2025-05-27T08:59:16Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"dataset:eli5",
"dataset:snli",
"dataset:multi_nli",
"dataset:wikihow",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/QQP",
"dataset:embedding-data/SPECTER",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/WikiAnswers",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-05-27T08:43:13Z | ---
language: en
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
pipeline_tag: sentence-similarity
---
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 32-dimensional dense vector space and can be used for tasks like clustering or semantic search.
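A minimal usage sketch with the standard `sentence-transformers` API:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("qxakshat/all-MiniLM-L6-v2-32dim")
embeddings = model.encode([
    "This framework generates embeddings for each input sentence.",
    "Sentences are passed as a list of strings.",
])
print(embeddings.shape)  # expected: (2, 32)
```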
## Model performance
Cosine-similarity scores on the STS test set:

| Dimensions | Pearson | Spearman |
|---|---|---|
| 384 (original) | 0.8274 | 0.8203 |
| 128 | 0.8165 | 0.8180 |
| 64 | 0.7855 | 0.7973 |
| 32 | 0.7256 | 0.7481 |

Created using [dimensionality_reduction](https://github.com/UKPLab/sentence-transformers/blob/master/examples/sentence_transformer/training/distillation/dimensionality_reduction.py). |
BSC-LT/ALIA-40b | BSC-LT | 2025-05-27T08:58:42Z | 385 | 76 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"bg",
"ca",
"code",
"cs",
"cy",
"da",
"de",
"el",
"en",
"es",
"et",
"eu",
"fi",
"fr",
"ga",
"gl",
"hr",
"hu",
"it",
"lt",
"lv",
"mt",
"nl",
"nn",
"oc",
"pl",
"pt",
"ro",
"ru",
"sh",
"sk",
"sl",
"sr",
"sv",
"uk",
"dataset:oscar-corpus/colossal-oscar-1.0",
"dataset:HuggingFaceFW/fineweb-edu",
"dataset:joelniklaus/eurlex_resources",
"dataset:joelniklaus/legal-mc4",
"dataset:projecte-aina/CATalog",
"dataset:UFRGS/brwac",
"dataset:community-datasets/hrwac",
"dataset:danish-foundation-models/danish-gigaword",
"dataset:HiTZ/euscrawl",
"dataset:PleIAs/French-PD-Newspapers",
"dataset:PleIAs/French-PD-Books",
"dataset:AI-team-UoA/greek_legal_code",
"dataset:HiTZ/latxa-corpus-v1.1",
"dataset:allenai/peS2o",
"dataset:pile-of-law/pile-of-law",
"dataset:PORTULAN/parlamento-pt",
"dataset:hoskinson-center/proof-pile",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:bigcode/starcoderdata",
"dataset:bjoernp/tagesschau-2018-2023",
"dataset:EleutherAI/the_pile_deduplicated",
"arxiv:2403.14009",
"arxiv:2403.20266",
"arxiv:2101.00027",
"arxiv:2207.00220",
"arxiv:1810.06694",
"arxiv:1911.05507",
"arxiv:1906.03741",
"arxiv:2406.17557",
"arxiv:2402.06619",
"arxiv:1803.09010",
"arxiv:2502.08489",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:eu"
]
| text-generation | 2024-12-09T14:04:29Z | ---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
language:
- bg
- ca
- code
- cs
- cy
- da
- de
- el
- en
- es
- et
- eu
- fi
- fr
- ga
- gl
- hr
- hu
- it
- lt
- lv
- mt
- nl
- nn
- 'no'
- oc
- pl
- pt
- ro
- ru
- sh
- sk
- sl
- sr
- sv
- uk
datasets:
- oscar-corpus/colossal-oscar-1.0
- HuggingFaceFW/fineweb-edu
- joelniklaus/eurlex_resources
- joelniklaus/legal-mc4
- projecte-aina/CATalog
- UFRGS/brwac
- community-datasets/hrwac
- danish-foundation-models/danish-gigaword
- HiTZ/euscrawl
- PleIAs/French-PD-Newspapers
- PleIAs/French-PD-Books
- AI-team-UoA/greek_legal_code
- HiTZ/latxa-corpus-v1.1
- allenai/peS2o
- pile-of-law/pile-of-law
- PORTULAN/parlamento-pt
- hoskinson-center/proof-pile
- togethercomputer/RedPajama-Data-1T
- bigcode/starcoderdata
- bjoernp/tagesschau-2018-2023
- EleutherAI/the_pile_deduplicated
---

> [!WARNING]
> **WARNING:** This is a base language model that has not undergone instruction tuning or alignment with human preferences. As a result, it may generate outputs that are inappropriate, misleading, biased, or unsafe. These risks can be mitigated through additional post-training stages, which is strongly recommended before deployment in any production system, especially for high-stakes applications.
# ALIA-40b Model Card
ALIA-40b is a highly multilingual model pre-trained from scratch that will come with its respective base and instruction-tuned variants. This model card corresponds to the 40b base version.
To visit the model cards of other model versions, please refer to the [Model Index](#model-index).
This model is released under a permissive [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0).
Along with the open weights, all training scripts and configuration files are made publicly available in [this GitHub repository](https://github.com/langtech-bsc/alia).
---
## Model Details
### Description
Transformer-based decoder-only language model that has been pre-trained from scratch on 9.37 trillion tokens of highly curated data.
The pre-training corpus contains text in 35 European languages and code.
### Hyperparameters
The full list of hyperparameters can be found [here](https://github.com/langtech-bsc/alia/blob/main/configs).
### Architecture
| | |
|-------------------------|:--------------|
| Total Parameters | 40,433,885,184|
| Embedding Parameters | 2,097,152,000 |
| Layers | 48 |
| Hidden size | 8,192 |
| Attention heads | 64 |
| Context length | 32,768 |
| Vocabulary size | 256,000 |
| Precision | bfloat16 |
| Embedding type | RoPE |
| Activation Function | SwiGLU |
| Layer normalization | RMS Norm |
| Flash attention | ✅ |
| Grouped Query Attention | ✅ |
| Num. query groups | 8 |
---
## Intended Use
### Direct Use
The models are intended for both research and commercial use in any of the languages included in the training data.
The base models are intended either for language generation or to be further fine-tuned for specific use-cases.
The instruction-tuned variants can be used as general-purpose assistants, as long as the user is fully aware of the model’s limitations.
### Out-of-scope Use
The model is not intended for malicious activities, such as harming others or violating human rights.
Any downstream application must comply with current laws and regulations.
Irresponsible usage in production environments without proper risk assessment and mitigation is also discouraged.
---
## Hardware and Software
### Training Framework
Pre-training was conducted using NVIDIA’s [NeMo Framework](https://docs.nvidia.com/nemo-framework/index.html),
which leverages PyTorch Lightning for efficient model training in highly distributed settings.
The instruction-tuned versions were produced with [FastChat](https://github.com/lm-sys/FastChat).
### Compute Infrastructure
All models were trained on [MareNostrum 5](https://www.bsc.es/ca/marenostrum/marenostrum-5), a pre-exascale EuroHPC supercomputer hosted and
operated by Barcelona Supercomputing Center.
The accelerated partition is composed of 1,120 nodes with the following specifications:
- 4x Nvidia Hopper GPUs with 64GB HBM2 memory
- 2x Intel Sapphire Rapids 8460Y+ at 2.3GHz, 32 cores each (64 cores per node)
- 4x NDR200 (BW per node 800Gb/s)
- 512 GB of Main memory (DDR5)
- 460GB on NVMe storage
|Model|Nodes|GPUs|
|:---:|:---:|:---:|
|2B|64|256|
|7B|128|512|
|40B|256 / 512|1,024 / 2,048|
---
## How to use
This section offers examples of how to perform inference using various methods.
### Inference
You'll find different techniques for running inference, including Huggingface's Text Generation Pipeline, multi-GPU configurations, and vLLM for scalable and efficient generation.
#### Inference with Huggingface's Text Generation Pipeline
The Huggingface Text Generation Pipeline provides a straightforward way to run inference using the ALIA-40b model.
```bash
pip install transformers torch accelerate sentencepiece protobuf
```
<details>
<summary>Show code</summary>
```python
from transformers import pipeline, set_seed

model_id = "BSC-LT/ALIA-40b"

# Sample prompts
prompts = [
    "Las fiestas de San Isidro Labrador de Yecla son",
    "El punt més alt del Parc Natural del Montseny és",
    "Sentence in English: The typical chance of such a storm is around 10%. Sentence in Catalan:",
    "Si le monde était clair",
    "The future of AI is",
]

# Create the pipeline
generator = pipeline("text-generation", model_id, device_map="auto")
generation_args = {
    "temperature": 0.1,
    "top_p": 0.95,
    "max_new_tokens": 25,
    "repetition_penalty": 1.2,
    "do_sample": True,
}

# Fix the seed
set_seed(1)

# Generate texts
outputs = generator(prompts, **generation_args)

# Print outputs
for output in outputs:
    print(output[0]["generated_text"])
```
</details>
#### Inference with single / multi GPU
This section provides a simple example of how to run inference using Huggingface's AutoModel class.
```bash
pip install transformers torch accelerate sentencepiece protobuf
```
<details>
<summary>Show code</summary>
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "BSC-LT/ALIA-40b"

# Input text
text = "El mercat del barri és"

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load the model
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

generation_args = {
    "temperature": 0.1,
    "top_p": 0.95,
    "max_new_tokens": 25,
    "repetition_penalty": 1.2,
    "do_sample": True,
}

inputs = tokenizer(text, return_tensors="pt")

# Generate texts
output = model.generate(
    input_ids=inputs["input_ids"].to(model.device),
    attention_mask=inputs["attention_mask"],
    **generation_args,
)

# Print outputs
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
</details>
#### Inference with vLLM
vLLM is an efficient library for inference that enables faster and more scalable text generation.
```bash
pip install vllm
```
<details>
<summary>Show code</summary>
```python
from vllm import LLM, SamplingParams

model_id = "BSC-LT/ALIA-40b"

# Sample prompts
prompts = [
    "Las fiestas de San Isidro Labrador de Yecla son",
    "El punt més alt del Parc Natural del Montseny és",
    "Sentence in English: The typical chance of such a storm is around 10%. Sentence in Catalan:",
    "Si le monde était clair",
    "The future of AI is",
]

# Create a sampling params object
sampling_params = SamplingParams(
    temperature=0.1,
    top_p=0.95,
    seed=1,
    max_tokens=25,
    repetition_penalty=1.2,
)

# Create an LLM
llm = LLM(model=model_id, tensor_parallel_size=4)

# Generate texts
outputs = llm.generate(prompts, sampling_params)

# Print outputs
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
</details>
---
## Data
### Pretraining Data
The pre-training corpus comprises data from 35 European languages and 92 programming languages, with detailed data sources provided below.
The initial 1.6 training epochs used 2.4 trillion tokens, obtained by manually adjusting data proportions to balance representation and give more weight to Spain's co-official languages (Spanish, Catalan, Galician, and Basque). To this end, we downsampled code and English data by half, oversampled the co-official languages by 2x, and kept the remaining languages in their original proportions.
During the following training, the Colossal OSCAR dataset was replaced with the FineWeb-Edu dataset.
This adjustment resulted in a total of 2.68 trillion tokens used across 2 epochs, distributed as outlined below:

The pretraining corpus is predominantly composed of data from Colossal OSCAR, which contributes a significant 53.05% of the total tokens.
Following this, Starcoder provides 13.67%, and FineWeb-Edu (350B tokens subset) adds 10.24%. The next largest sources are HPLT at 4.21% and French-PD at 3.59%.
Other notable contributions include MaCoCu, Legal-ES, and EurLex, each contributing around 1.72% to 1.41%.
These major sources collectively form the bulk of the corpus, ensuring a rich and diverse dataset for training the language model.
The remaining 10% comes from smaller sources in various languages.
Feel free to click the expand button below to see the full list of sources.
<details>
<summary>Data Sources</summary>
| Dataset | Language | Source |
|---|---|---|
| Colossal OSCAR 1.0 | bg, ca, cs, cy, da, de, el, en, es, et, eu, fi, fr, ga, gl, hr, hu, it, lt, lv, mt, nl, nn, no, oc, pl, pt, ro, ru, sh, sk, sl, sr, sv, uk | Brack et al., 2024 |
| Aya Dataset (w/o Evaluation Suite) | eu, hr, nl, fi, ka, hu, lt, nn, ro, sk, lv, cy, bg, cs, en, fr, de, ga, mt, pl, ru, sl, sv, ca, da, et, gl, el, it, no, pt, sr, es, uk | Singh et al., 2024 |
| Wikimedia dumps | bg, ca, cs, da, de, el, en, es, et, eu, fi, fr, ga, gl, hr, hu, it, lt, lv, mt, nl, nn, no, pl, pt, ro, sh, sk, sl, sr, uk | [Link](https://dumps.wikimedia.org/) |
| OpenSubtitles v2016 | bg, ca, cs, da, de, el, en, es, et, eu, fi, fr, gl, hr, it, lt, lv, nl, no, pl, pt, ro, sk, sl, sr, sv, uk | Lison & Tiedemann, 2016 |
| EurLEX-Resources | bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv | [Link](https://huggingface.co/datasets/joelniklaus/eurlex_resources) |
| MC4-Legal | bg, cs, da, de, el, en, es, et, fi, fr, ga, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv | [Link](https://huggingface.co/datasets/joelito/legal-mc4) |
| Parlamint | at, bg, cz, dk, ee, es, es-ga, fi, fr, gb, gr, hr, hu, it, lv, nl, no, pl, pt, rs, se, si | Erjavec et al., 2021 |
| MaCoCu | bg, ca, el, hr, mt, sl, sr, uk | Bañón et al., 2022 |
| CURLICAT | bg, hr, hu, pl, ro, sk, sl | Váradi et al., 2022 |
| Norwegian Colossal Corpus (NCC) | nn, no | Kummervold et al., 2021 |
| Academic Slovene KAS 2.0 | sl | Žagar et al., 2022 |
| BIGPATENT | en | Sharma et al., 2019 |
| Biomedical-ES | es | Internally generated biomedical dataset: Wikipedia LS, Pubmed, MeSpEn, patents, clinical cases, medical crawler |
| Brazilian Portuguese Web as Corpus (BrWaC) | pt | Wagner Filho et al., 2018 |
| Bulgarian National Corpus (BulNC) | bg | [Link](http://old.dcl.bas.bg/dataset/BulNC.7z) |
| CaBeRnet | fr | Popa-Fabre et al., 2020 |
| CATalog 1.0 | ca | Palomar-Giner et al., 2024 |
| CorpusNÓS | gl | de-Dios-Flores et al., 2024 |
| Croatian Web as Corpus 2.1 (hrWaC) | hr | Ljubešić & Klubička, 2014 |
| DaNewsroom | da | Varab & Schluter, 2020 |
| Danish GigaWord | da | Strømberg-Derczynski et al., 2021 |
| Dolmino-mix-1124 (subset without synthetically generated data and proprietary licenses) | en | Team OLMo, 2024 |
| DK-CLARIN Reference Corpus of General Danish | da | [Link](https://korpus.dsl.dk/clarin/) |
| Estonian National Corpus 2021 (ENC) | et | Koppel & Kallas, 2022 |
| Estonian Reference Corpus (ERC) | et | [Link](https://www.cl.ut.ee/korpused/segakorpus/) |
| EusCrawl (w/o Wikipedia or NC-licenses) | eu | Artetxe et al., 2022 |
| FineWeb-Edu (350BT subset) | en | Penedo et al., 2024 |
| Fineweb2 (ad hoc subset of 178BT) | ar, as, bg, ca, cs, cy, da, de, el, es, et, eu, fi, fr, ga, gl, hr, hu, it, lt, lv, mt, nl, nn, no, oc, pl, pt, ro, ru, sk, sl, sr, sv, uk | Penedo et al., 2024 |
| French Public Domain Books (French-PD) | fr | [Link](https://huggingface.co/datasets/PleIAs/French-PD-Books) |
| French Public Domain Newspapers (French-PD) | fr | [Link](https://huggingface.co/datasets/PleIAs/French-PD-Newspapers) |
| German Web as Corpus (DeWaC) | de | [Link](https://docs.sslmit.unibo.it/doku.php?id=corpora:dewac) |
| Greek Legal Code (GLC) | el | Papaloukas et al., 2021 |
| Greek Web Corpus (GWC) | el | Outsios et al., 2018 |
| HPLT v1 - Spanish | es | de Gibert et al., 2024 |
| HPLT v1.1 - Spanish | es | de Gibert et al., 2024 |
| Irish Universal Dependencies (Ga-UD) | ga | [Link](https://universaldependencies.org/ga/index.html) |
| Italian Web as Corpus (ItWaC) | it | [Link](https://docs.sslmit.unibo.it/doku.php?id=corpora:itwac) |
| Korpus Malti | mt | Micallef et al., 2022 |
| Korpus slovenských právnych predpisov v1.9 (SK-Laws) | sk | [Link](https://www.juls.savba.sk/data/marcell/legal-sk-20220322-1.9.ver.xz) |
| Latxa Corpus v1.1 (GAITU) | eu | Etxaniz et al., 2024 [Link](https://huggingface.co/datasets/HiTZ/latxa-corpus-v1.1) |
| Laws and legal acts of Ukraine (UK-Laws) | uk | [Link](https://lang.org.ua/en/corpora/#anchor7) |
| Legal-ES | es | Internally generated legal dataset: BOE, BORME, Senado, Congreso, Spanish court orders, DOGC |
| MARCELL Romanian legislative subcorpus v2 | ro | [Link](https://elrc-share.eu/repository/browse/marcell-romanian-legislative-subcorpus-v2/2da548428b9d11eb9c1a00155d026706ce94a6b59ffc4b0e9fb5cd9cebe6889e/) |
| Math AMPS | en | Hendrycks et al., 2021 |
| NKJP National Corpus of Polish v1.2 (NKJP) | pl | Lewandowska-Tomaszczyk et al., 2013 |
| Occitan Corpus (IEA-AALO) | oc | Provided by [IEA](https://www.institutestudisaranesi.cat/) |
| Open Legal Data - German court decisions and laws | de | Ostendorff et al., 2020 |
| ParlamentoPT | pt | Rodrigues et al., 2023 |
| peS2o | en | Soldaini & Lo, 2023 |
| PG-19 | en | Rae et al., 2019 |
| Pile of Law (selected subsets) | en | Henderson* et al., 2022 |
| Polish Parliamentary Corpus (PPC) | pl | Ogrodniczuk, 2018 |
| Proof Pile | en | [Link](https://huggingface.co/datasets/hoskinson-center/proof-pile) |
| RedPajama-Data T1 (StackExchange subset) | en | Computer, 2023 |
| Scientific-ES | es | Internally generated scientific dataset: Dialnet, Scielo, CSIC, TDX, BSC, UCM |
| SK Court Decisions v2.0 (OD-Justice) | sk | [Link](https://www.juls.savba.sk/data/od-justice/od-justice-2.0.ver.xz) |
| Slovene Web as Corpus (slWaC) | sl | Erjavec et al., 2015 |
| SoNaR Corpus NC 1.2 | nl | [Link](https://taalmaterialen.ivdnt.org/download/tstc-sonar-corpus/) |
| Spanish Legal Domain Corpora (Spanish-Legal) | es | Gutiérrez-Fandiño et al., 2021 |
| SrpKorSubset: news, legal, academic, conversation, literary (SrpKor) | sr | [Link](http://www.korpus.matf.bg.ac.rs/) |
| Starcoder | code | Li et al., 2023 |
| State-related content from the Latvian Web (State-Latvian-Web) | lv | [Link](https://catalog.elra.info/en-us/repository/browse/ELRA-W0169/) |
| SYN v9: large corpus of written Czech | cs | Křen et al., 2021 |
| Tagesschau Archive Article | de | [Link](https://huggingface.co/datasets/bjoernp/tagesschau-2018-2023) |
| The Danish Parliament Corpus 2009 - 2017, v1 | da | Hansen, 2018 |
| The Gaois bilingual corpus of English-Irish legislation (Ga-Legislation) | ga | [Link](https://portulanclarin.net/repository/browse/the-gaois-bilingual-corpus-of-english-irish-legislation-processed/daeac17c9e3511ea9b7f02420a000407b83de243dc0b469aab41084386c5b80f/) |
| The Pile (PhilPapers) | en | Gao et al., 2021 |
| The Swedish Culturomics Gigaword Corpus (Swedish-Gigaword) | sv | Rødven-Eide, 2016 |
| Welsh-GOV | cy | Crawling from [Link](https://www.llyw.cymru) |
| Yle Finnish News Archive (Yle-News) | fi | [Link](http://urn.fi/urn:nbn:fi:lb-2021050401) |
To consult the data summary document with the respective licences, please send an e-mail to [email protected].
<details>
<summary>References</summary>
- Abadji, J., Suárez, P. J. O., Romary, L., & Sagot, B. (2021). Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus (H. Lüngen, M. Kupietz, P. Bański, A. Barbaresi, S. Clematide, & I. Pisetta, Eds.; pp. 1–9). Leibniz-Institut für Deutsche Sprache. [Link](https://doi.org/10.14618/ids-pub-10468)
- Artetxe, M., Aldabe, I., Agerri, R., Perez-de-Viñaspre, O., & Soroa, A. (2022). Does Corpus Quality Really Matter for Low-Resource Languages?
- Bañón, M., Esplà-Gomis, M., Forcada, M. L., García-Romero, C., Kuzman, T., Ljubešić, N., van Noord, R., Sempere, L. P., Ramírez-Sánchez, G., Rupnik, P., Suchomel, V., Toral, A., van der Werff, T., & Zaragoza, J. (2022). MaCoCu: Massive collection and curation of monolingual and bilingual data: Focus on under-resourced languages. Proceedings of the 23rd Annual Conference of the European Association for Machine Translation, 303–304. [Link](https://aclanthology.org/2022.eamt-1.41)
- Brack, M., Ostendorff, M., Suarez, P. O., Saiz, J. J., Castilla, I. L., Palomar-Giner, J., Shvets, A., Schramowski, P., Rehm, G., Villegas, M., & Kersting, K. (2024). Community OSCAR: A Community Effort for Multilingual Web Data. [Link](https://occiglot.eu/papers/Community_Oscar.pdf)
- Computer, T. (2023). RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset [Computer software]. [Link](https://github.com/togethercomputer/RedPajama-Data)
- de Gibert, O., Nail, G., Arefyev, N., Bañón, M., van der Linde, J., Ji, S., Zaragoza-Bernabeu, J., Aulamo, M., Ramírez-Sánchez, G., Kutuzov, A., Pyysalo, S., Oepen, S., & Tiedemann, J. (2024). A New Massive Multilingual Dataset for High-Performance Language Technologies (arXiv:2403.14009). arXiv. [Link](http://arxiv.org/abs/2403.14009)
- Dodge, J., Sap, M., Marasović, A., Agnew, W., Ilharco, G., Groeneveld, D., Mitchell, M., & Gardner, M. (2021). Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus. In M.-F. Moens, X. Huang, L. Specia, & S. W. Yih (Eds.), Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (pp. 1286–1305). Association for Computational Linguistics. [Link](https://doi.org/10.18653/v1/2021.emnlp-main.98)
- Erjavec, T., Ljubešić, N., & Logar, N. (2015). The slWaC corpus of the Slovene web. Informatica (Slovenia), 39, 35–42.
- Erjavec, T., Ogrodniczuk, M., Osenova, P., Ljubešić, N., Simov, K., Grigorova, V., Rudolf, M., Pančur, A., Kopp, M., Barkarson, S., Steingrímsson, S. hór, van der Pol, H., Depoorter, G., de Does, J., Jongejan, B., Haltrup Hansen, D., Navarretta, C., Calzada Pérez, M., de Macedo, L. D., … Rayson, P. (2021). Linguistically annotated multilingual comparable corpora of parliamentary debates ParlaMint.ana 2.1. [Link](http://hdl.handle.net/11356/1431)
- Etxaniz, J., Sainz, O., Perez, N., Aldabe, I., Rigau, G., Agirre, E., Ormazabal, A., Artetxe, M., & Soroa, A. (2024). Latxa: An Open Language Model and Evaluation Suite for Basque. [Link](https://arxiv.org/abs/2403.20266)
- Gao, L., Biderman, S., Black, S., Golding, L., Hoppe, T., Foster, C., Phang, J., He, H., Thite, A., Nabeshima, N., Presser, S., & Leahy, C. (2021). The Pile: An 800GB Dataset of Diverse Text for Language Modeling. CoRR, abs/2101.00027. [Link](https://arxiv.org/abs/2101.00027)
- Gutiérrez-Fandiño, A., Armengol-Estapé, J., Gonzalez-Agirre, A., & Villegas, M. (2021). Spanish Legalese Language Model and Corpora.
- Hansen, D. H. (2018). The Danish Parliament Corpus 2009—2017, v1. [Link](http://hdl.handle.net/20.500.12115/8)
- Henderson*, P., Krass*, M. S., Zheng, L., Guha, N., Manning, C. D., Jurafsky, D., & Ho, D. E. (2022). Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset. arXiv. [Link](https://arxiv.org/abs/2207.00220)
- Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S., Tang, E., Song, D., & Steinhardt, J. (2021). Measuring Mathematical Problem Solving With the MATH Dataset. NeurIPS.
- Jansen, T., Tong, Y., Zevallos, V., & Suarez, P. O. (2022). Perplexed by Quality: A Perplexity-based Method for Adult and Harmful Content Detection in Multilingual Heterogeneous Web Data.
- Koppel, K., & Kallas, J. (2022). Eesti keele ühendkorpuste sari 2013–2021: Mahukaim eestikeelsete digitekstide kogu. Eesti Rakenduslingvistika Ühingu Aastaraamat Estonian Papers in Applied Linguistics, 18, 207–228. [Link](https://doi.org/10.5128/erya18.12)
- Křen, M., Cvrček, V., Henyš, J., Hnátková, M., Jelínek, T., Kocek, J., Kováříková, D., Křivan, J., Milička, J., Petkevič, V., Procházka, P., Skoumalová, H., Šindlerová, J., & Škrabal, M. (2021). SYN v9: Large corpus of written Czech. [Link](http://hdl.handle.net/11234/1-4635)
- Kreutzer, J., Caswell, I., Wang, L., Wahab, A., van Esch, D., Ulzii-Orshikh, N., Tapo, A., Subramani, N., Sokolov, A., Sikasote, C., Setyawan, M., Sarin, S., Samb, S., Sagot, B., Rivera, C., Rios, A., Papadimitriou, I., Osei, S., Suarez, P. O., … Adeyemi, M. (2022). Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets. Transactions of the Association for Computational Linguistics, 10, 50–72. [Link](https://doi.org/10.1162/tacl_a_00447)
- Kummervold, P. E., De la Rosa, J., Wetjen, F., & Brygfjeld, S. A. (2021). Operationalizing a National Digital Library: The Case for a Norwegian Transformer Model. In S. Dobnik & L. Øvrelid (Eds.), Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa) (pp. 20–29). Linköping University Electronic Press, Sweden. [Link](https://aclanthology.org/2021.nodalida-main.3)
- Lewandowska-Tomaszczyk, B., Górski, R., Łaziński, M., & Przepiórkowski, A. (2013). The National Corpus of Polish (NKJP). Language use and data analysis. 309–319.
- Li, R., Allal, L. B., Zi, Y., Muennighoff, N., Kocetkov, D., Mou, C., Marone, M., Akiki, C., Li, J., Chim, J., Liu, Q., Zheltonozhskii, E., Zhuo, T. Y., Wang, T., Dehaene, O., Davaadorj, M., Lamy-Poirier, J., Monteiro, J., Shliazhko, O., … Vries, H. de. (2023). StarCoder: May the source be with you!
- Lison, P., & Tiedemann, J. (2016). OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In N. Calzolari, K. Choukri, T. Declerck, S. Goggi, M. Grobelnik, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16) (pp. 923–929). European Language Resources Association (ELRA). [Link](https://aclanthology.org/L16-1147)
- Ljubešić, N., & Klubička, F. (2014). Bs,hr,srWaC - Web Corpora of Bosnian, Croatian and Serbian. In F. Bildhauer & R. Schäfer (Eds.), Proceedings of the 9th Web as Corpus Workshop (WaC-9) (pp. 29–35). Association for Computational Linguistics. [Link](https://doi.org/10.3115/v1/W14-0405)
- Micallef, K., Gatt, A., Tanti, M., van der Plas, L., & Borg, C. (2022). Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and BERT Models for Maltese. Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing, 90–101. [Link](https://doi.org/10.18653/v1/2022.deeplo-1.10)
- Ogrodniczuk, M. (2018). Polish Parliamentary Corpus. [Link](https://api.semanticscholar.org/CorpusID:235134113)
- Ostendorff, M., Blume, T., & Ostendorff, S. (2020). Towards an Open Platform for Legal Information. Proceedings of the ACM/IEEE Joint Conference on Digital Libraries in 2020, 385–388. [Link](https://doi.org/10.1145/3383583.3398616)
- Ostendorff, M., Suarez, P. O., Lage, L. F., & Rehm, G. (2024). LLM-Datasets: An Open Framework for Pretraining Datasets of Large Language Models. First Conference on Language Modeling. [Link](https://openreview.net/forum?id=5RdIMlGLXL)
- Outsios, S., Skianis, K., Meladianos, P., Xypolopoulos, C., & Vazirgiannis, M. (2018). Word Embeddings from Large-Scale Greek Web content. arXiv Preprint arXiv:1810.06694.
- Palomar-Giner, J., Saiz, J. J., Espuña, F., Mina, M., Da Dalt, S., Llop, J., Ostendorff, M., Ortiz Suarez, P., Rehm, G., Gonzalez-Agirre, A., & Villegas, M. (2024). A CURATEd CATalog: Rethinking the Extraction of Pretraining Corpora for Mid-Resourced Languages. In N. Calzolari, M.-Y. Kan, V. Hoste, A. Lenci, S. Sakti, & N. Xue (Eds.), Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) (pp. 335–349). ELRA and ICCL. [Link](https://aclanthology.org/2024.lrec-main.31)
- Papaloukas, C., Chalkidis, I., Athinaios, K., Pantazi, D.-A., & Koubarakis, M. (2021). Multi-granular Legal Topic Classification on Greek Legislation. Proceedings of the Natural Legal Language Processing Workshop 2021, 63–75. [Link](https://doi.org/10.48550/arXiv.2109.15298)
- Popa-Fabre, M., Ortiz Suárez, P. J., Sagot, B., & de la Clergerie, É. (2020). French Contextualized Word-Embeddings with a sip of CaBeRnet: A New French Balanced Reference Corpus. Proceedings of the 8th Workshop on Challenges in the Management of Large Corpora, 15–23. [Link](https://aclanthology.org/2020.cmlc-1.3)
- Rae, J. W., Potapenko, A., Jayakumar, S. M., Hillier, C., & Lillicrap, T. P. (2019). Compressive Transformers for Long-Range Sequence Modelling. arXiv Preprint. [Link](https://arxiv.org/abs/1911.05507)
- Rodrigues, J., Gomes, L., Silva, J., Branco, A., Santos, R., Cardoso, H. L., & Osório, T. (2023). Advancing Neural Encoding of Portuguese with Transformer Albertina PT-\*.
- Rødven-Eide, S. (2016). The Swedish Culturomics Gigaword Corpus [Dataset]. Språkbanken Text. [Link](https://doi.org/10.23695/3WMV-1Z09)
- Sharma, E., Li, C., & Wang, L. (2019). BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization. CoRR, abs/1906.03741. [Link](http://arxiv.org/abs/1906.03741)
- Soldaini, L., & Lo, K. (2023). peS2o (Pretraining Efficiently on S2ORC) Dataset. Allen Institute for AI.
- Strømberg-Derczynski, L., Ciosici, M., Baglini, R., Christiansen, M. H., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Madsen, J., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2021). The Danish Gigaword Corpus. Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), 413–421. [Link](https://aclanthology.org/2021.nodalida-main.46)
- Subramani, N., Luccioni, S., Dodge, J., & Mitchell, M. (2023). Detecting Personal Information in Training Corpora: An Analysis. 208–220. [Link](https://doi.org/10.18653/v1/2023.trustnlp-1.18)
- Team OLMo, Pete Walsh, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Shane Arora, Akshita Bhagia, Yuling Gu, Shengyi Huang, Matt Jordan, Nathan Lambert, Dustin Schwenk, Oyvind Tafjord, Taira Anderson, David Atkinson, Faeze Brahman, Christopher Clark, Pradeep Dasigi, Nouha Dziri, Michal Guerquin, Hamish Ivison, Pang Wei Koh, Jiacheng Liu, Saumya Malik, William Merrill, Lester James V. Miranda, Jacob Morrison, Tyler Murray, Crystal Nam, Valentina Pyatkin, Aman Rangapur, Michael Schmitz, Sam Skjonsberg, David Wadden, Christopher Wilhelm, Michael Wilson, Luke Zettlemoyer, Ali Farhadi, Noah A. Smith, Hannaneh Hajishirzi. (2024). 2 OLMo 2 Furious.
- Varab, D., & Schluter, N. (2020). DaNewsroom: A Large-scale Danish Summarisation Dataset. Proceedings of The 12th Language Resources and Evaluation Conference, 6731–6739. [Link](https://www.aclweb.org/anthology/2020.lrec-1.831)
- Váradi, T., Nyéki, B., Koeva, S., Tadić, M., Štefanec, V., Ogrodniczuk, M., Nitoń, B., Pezik, P., Barbu Mititelu, V., Irimia, E., Mitrofan, M., Tufiș, D., Garabík, R., Krek, S., & Repar, A. (2022). Introducing the CURLICAT Corpora: Seven-language Domain Specific Annotated Corpora from Curated Sources. In N. Calzolari, F. Béchet, P. Blache, K. Choukri, C. Cieri, T. Declerck, S. Goggi, H. Isahara, B. Maegaard, J. Mariani, H. Mazo, J. Odijk, & S. Piperidis (Eds.), Proceedings of the Thirteenth Language Resources and Evaluation Conference (pp. 100–108). European Language Resources Association. [Link](https://aclanthology.org/2022.lrec-1.11)
- Wagner Filho, J. A., Wilkens, R., Idiart, M., & Villavicencio, A. (2018). The brwac corpus: A new open resource for brazilian portuguese. Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
- Žagar, A., Kavaš, M., Robnik-Šikonja, M., Erjavec, T., Fišer, D., Ljubešić, N., Ferme, M., Borovič, M., Boškovič, B., Ojsteršek, M., & Hrovat, G. (2022). Corpus of academic Slovene KAS 2.0. [Link](http://hdl.handle.net/11356/1448)
- Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel Bowman. 2022. BBQ: A hand-built bias benchmark for question answering. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2086–2105, Dublin, Ireland. Association for Computational Linguistics.
- Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The Woman Worked as a Babysitter: On Biases in Language Generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3407–3412, Hong Kong, China. Association for Computational Linguistics.
- Clark, P., Cowhey, I., Etzioni, O., Khot, T., Sabharwal, A., Schoenick, C., & Tafjord, O. (2018). Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge. arXiv:1803. 05457v1.
- Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.
- Penedo, G., Kydlíček, H., allal, L. B., Lozhkov, A., Mitchell, M., Raffel, C., Von Werra, L., & Wolf, T. (2024). The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale (arXiv:2406.17557). arXiv. http://arxiv.org/abs/2406.17557
- Singh, S., Vargus, F., Dsouza, D., Karlsson, B. F., Mahendiran, A., Ko, W.-Y., Shandilya, H., Patel, J., Mataciunas, D., OMahony, L., Zhang, M., Hettiarachchi, R., Wilson, J., Machado, M., Moura, L. S., Krzemiński, D., Fadaei, H., Ergün, I., Okoh, I., … Hooker, S. (2024). Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning (arXiv:2402.06619). arXiv. http://arxiv.org/abs/2402.06619
</details>
</details>
In the final pre-training phase, we used a high-quality subset of 160 billion tokens. Additionally, to expand the model's context window to 32K, 6.3 billion tokens were processed using the Llama 3.1 RoPE interpolation strategy.
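For context, Llama-3.1-style RoPE scaling is typically expressed in a `transformers` `config.json` along these lines (the values below are illustrative, not ALIA-40b's actual configuration):
```python
# Hypothetical "rope_scaling" entry; the factors are illustrative only.
rope_scaling = {
    "rope_type": "llama3",
    "factor": 8.0,
    "low_freq_factor": 1.0,
    "high_freq_factor": 4.0,
    "original_max_position_embeddings": 8192,
}
```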
We provide an extensive Datasheet section following the best practices defined by [(Gebru et al., 2021)](https://arxiv.org/pdf/1803.09010).
<details>
<summary>Datasheet</summary>
#### Motivation
**For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description.**
The purpose of creating this dataset is to pre-train the Salamandra family of multilingual models with high performance in a large number of European languages (35)
and programming languages (92). We also want to represent the co-official languages of Spain: Spanish, Catalan, Galician and Basque. For this reason, we oversample
these languages by a factor of 2.
There is a great lack of massive multilingual data, especially in minority languages (Ostendorff & Rehm, 2023), so part of our efforts in the creation of
this pre-training dataset have resulted in the contribution to large projects such as the Community OSCAR (Brack et al., 2024), which includes 151 languages
and 40T words, or CATalog (Palomar-Giner et al., 2024), the largest open dataset in Catalan in the world.
**Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?**
The dataset has been created by the Language Technologies unit (LangTech) of the Barcelona Supercomputing Center - Centro Nacional de Supercomputación (BSC-CNS),
which aims to advance the field of natural language processing through cutting-edge research and development and the use of HPC. In particular, it was created by
the unit's data team, the main contributors being José Javier Saiz, Ferran Espuña and Jorge Palomar.
However, the creation of the dataset would not have been possible without the collaboration of a large number of collaborators, partners and public institutions,
which can be found in detail in the acknowledgements.
**Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number.**
This work has been promoted and financed by the Government of Catalonia through the [Aina project](https://projecteaina.cat/).
This work is funded by the _Ministerio para la Transformación Digital y de la Función Pública_ - Funded by EU – NextGenerationEU
within the framework of [ILENIA Project](https://proyectoilenia.es/) with reference 2022/TL22/00215337.
#### Composition
**What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description.**
The dataset consists entirely of text documents in various languages. Specifically, data was mainly sourced from the following databases and
repositories:
- **Common Crawl:** Repository that holds website data and is run by the Common Crawl non-profit organization. It is updated monthly and is
distributed under the CC0 1.0 public domain license.
- **GitHub:** Community platform that allows developers to create, store, manage, and share their code. Repositories are crawled and then
distributed with their original licenses, which may vary from permissive to non-commercial licenses.
- **Wikimedia:** Database that holds the collection databases managed by the Wikimedia Foundation, including Wikipedia, Wikibooks, Wikinews,
Wikiquote, Wikisource, and Wikivoyage. It is updated monthly and is distributed under Creative Commons Attribution-ShareAlike License 4.0.
- **EurLex:** Repository that holds the collection of legal documents from the European Union, available in all of the EU’s 24 official
languages and run by the Publications Office of the European Union. It is updated daily and is distributed under the Creative Commons
Attribution 4.0 International license.
- **Other repositories:** Specific repositories were crawled under permission for domain-specific corpora, which include academic, legal,
and newspaper repositories.
We provide a complete list of dataset sources at the end of this section.
**How many instances are there in total (of each type, if appropriate)?**
The dataset contains a diverse range of instances across multiple languages, with notable adjustments for certain languages. English
represents the largest portion, accounting for 39.31% of the total data. Spanish was upsampled by a factor of 2, bringing its share to 16.12%,
while Catalan (1.97%), Basque (0.24%), and Galician (0.31%) were also upsampled by 2. On the other hand, code-related data was downsampled
by half, making up 5.78% of the total. Other prominent languages include French (6.6%), Russian (5.56%), German (4.79%), and Hungarian
(4.59%), with several additional languages contributing between 1% and 2%, and smaller portions represented by a variety of others.
**Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable).**
The dataset is a sample from multiple sources, with different weights based on the primary language of the content: Spanish, Catalan,
Basque, and Galician content was upsampled by a factor of two, while programming languages were downsampled by a factor of half. Other
sources were sampled in proportion to their occurrence.
**What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description.**
Each instance consists of a text document processed for deduplication, language identification, and source-specific filtering. Some documents required
optical character recognition (OCR) to extract text from non-text formats such as PDFs.
**Is there a label or target associated with each instance? If so, please provide a description.**
Each instance is labelled with a unique identifier, the primary language of the content, and the URL for web-sourced instances. Additional labels were
automatically assigned to detect specific types of content (harmful or toxic content) and to flag preliminary indicators of undesired qualities
(very short documents, high density of symbols, etc.), which were used for filtering instances.
**Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text.**
No significant information is missing from the instances.
**Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit.**
Instances are related through shared metadata, such as source and language identifiers.
**Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them.**
The dataset is randomly divided into training, validation and test sets, where the validation and test sets are each 1% of the total corpus.
**Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description.**
Despite removing duplicated instances within each source, redundancy remains at the paragraph and sentence levels, particularly in web-sourced
instances where search engine optimization techniques and templates contribute to repeated textual patterns. Some instances may also be duplicated
across sources due to format variations.
**Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? If it links to or relies on external resources, a) are there guarantees that they will exist, and remain constant, over time; b) are there official archival versions of the complete dataset (i.e., including the external resources as they existed at the time the dataset was created); c) are there any restrictions (e.g., licenses, fees) associated with any of the external resources that might apply to a dataset consumer? Please provide descriptions of all external resources and any restrictions associated with them, as well as links or other access points, as appropriate.**
The dataset is self-contained and does not rely on external resources.
**Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor–patient confidentiality, data that includes the content of individuals’ non-public communications)? If so, please provide a description.**
The dataset does not contain confidential data.
**Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. If the dataset does not relate to people, you may skip the remaining questions in this section.**
The dataset includes web-crawled content, which may overrepresent pornographic material across languages (Kreutzer et al., 2022). Although
pre-processing techniques were applied to mitigate offensive content, the heterogeneity and scale of web-sourced data make exhaustive
filtering challenging: identifying all adult content is next to impossible without excessive filtering, which may in turn negatively
affect certain demographic groups (Dodge et al., 2021).
**Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset.**
The dataset does not explicitly identify any subpopulations.
**Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset? If so, please describe how.**
Web-sourced instances in the dataset may contain personally identifiable information (PII) that is publicly available on the Web, such as names,
IP addresses, email addresses, and phone numbers. While it would be possible to indirectly identify individuals through the combination of multiple
data points, the nature and scale of web data make it difficult to parse such information. Nevertheless, efforts are made to filter or anonymize
sensitive data (Mina et al., 2024), but some identifiable information may remain in the dataset.
**Does the dataset contain data that might be considered sensitive in any way? If so, please provide a description.**
Given that the dataset includes web-sourced content and other publicly available documents, instances may inadvertently reveal financial
information, health-related details, or forms of government identification, such as social security numbers (Subramani et al., 2023),
especially if the content originates from less-regulated sources or user-generated platforms.
#### Collection Process
**How was the data collected?**
This dataset was built by combining several sources, whose acquisition methods can be classified into three groups:
- Web-sourced datasets with some preprocessing available under permissive license.
- Domain-specific or language-specific raw crawls.
- Manually curated data obtained through collaborators, data providers (by means of legal assignment agreements) or open source projects (e.g. CATalog).
**What mechanisms or procedures were used to collect the data? How were these mechanisms or procedures validated?**
The data collection process was carried out using three different mechanisms, each corresponding to one of the groups defined in the previous answer. The specific methods used and their respective validation procedures are outlined below:
- Open Direct Download: Data were obtained directly from publicly accessible sources, such as websites or repositories that provide open data downloads. We validate the data with a data integrity check, which ensures that the downloaded files are complete, uncorrupted and in the expected format and structure (a minimal sketch of such a check follows this list).
- Ad hoc scrapers or crawlers: Custom web scraping scripts or crawlers were used to extract data from various online sources where direct downloads were not available. These scripts navigate web pages, extract relevant data and store it in a structured format. We validate this method with software unit tests to evaluate the functionality of individual components of the scraping programs, checking for errors or unexpected behaviour. In addition, data integrity tests were performed to verify that the collected data remained complete throughout the extraction and storage process.
- Direct download via FTP, SFTP, API or S3: Some datasets were acquired using secure transfer protocols such as FTP (File Transfer Protocol), SFTP (Secure File Transfer Protocol), or API (Application Programming Interface) requests from cloud storage services such as Amazon S3. As with the open direct download method, data integrity tests were used to validate the completeness of the files to ensure that the files were not altered or corrupted during the transfer process.
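As an illustration, a data-integrity check of the kind described above could look as follows (a minimal sketch; the manifest format and paths are hypothetical, not taken from the actual pipeline):
```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_downloads(manifest: dict, root: Path) -> list:
    """Return the downloaded files whose checksum does not match the manifest."""
    return [
        name for name, expected in manifest.items()
        if not (root / name).exists() or sha256_of(root / name) != expected
    ]
```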
**If the dataset is a sample from a larger set, what was the sampling strategy?**
The sampling strategy was to use the whole dataset resulting from the filtering explained in the 'preprocessing/cleaning/labelling' section,
with two adjustments: an upsampling factor of 2 (i.e. twice the probability of sampling a document) for the co-official languages of Spain
(Spanish, Catalan, Galician, Basque), and a downsampling factor of 1/2 for code (half the probability of sampling a code document,
evenly distributed among all programming languages).
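To make the weighting concrete, the per-document sampling weights can be expressed as follows (an illustrative sketch; the language codes are assumptions):
```python
UPSAMPLED = {"es", "ca", "gl", "eu"}  # co-official languages of Spain: x2

def sampling_weight(lang: str, is_code: bool = False) -> float:
    """Relative sampling weight per document, before normalization."""
    if is_code:
        return 0.5   # code downsampled by half
    if lang in UPSAMPLED:
        return 2.0   # upsampled by a factor of two
    return 1.0       # everything else sampled in proportion to occurrence
```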
**Who was involved in the data collection process and how were they compensated?**
This data is generally extracted, filtered and sampled by automated processes. The code required to run these processes has been developed entirely
by members of the Language Technologies data team, or otherwise obtained from open-source software. Furthermore, there has been no monetary
consideration for acquiring data from suppliers.
**Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances? If not, please describe the timeframe in which the data associated with the instances was created.**
Data were acquired and processed from April 2023 to April 2024. However, as mentioned, much of the data was obtained from open projects such
as Common Crawl, which contains data going back to 2014, so the end date (04/2024) is more informative than the start date.
**Were any ethical review processes conducted? If so, please provide a description of these review processes, including the outcomes, as well as a link or other access point to any supporting documentation.**
No particular ethical review process has been carried out as the data is mostly open and not particularly sensitive. However, we have an
internal evaluation team and a bias team to monitor ethical issues. In addition, we work closely with ‘Observatori d'Ètica en Intel·ligència
Artificial’ (OEIAC) and ‘Agencia Española de Supervisión de la Inteligencia Artificial’ (AESIA) to audit the processes we carry out from an
ethical and legal point of view, respectively.
#### Preprocessing
**Was any preprocessing/cleaning/labeling of the data done? If so, please provide a description. If not, you may skip the remaining questions in this section.**
No changes were made to the content of individual text document instances. However, the web-sourced documents underwent a filtering process based on specific criteria along two key dimensions:
- Quality filtering: The text processing pipeline CURATE (Palomar et al., 2024) calculates a quality score for each document based on a set of filtering criteria that identify undesirable textual characteristics. Any document with a score below the 0.8 threshold was excluded from the dataset (see the sketch below).
- Harmful or adult content filtering: To reduce the amount of harmful or inappropriate material in the dataset, documents from Colossal OSCAR were filtered using the Ungoliant pipeline (Abadji et al., 2021), which uses the 'harmful\_pp' field, a perplexity-based score generated by a language model.
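A minimal sketch of the threshold-based exclusion described above (the field name `quality_score` is an assumption, not CURATE's real schema):
```python
raw_corpus = [
    {"text": "...", "quality_score": 0.93},
    {"text": "...", "quality_score": 0.41},
]

def keep_document(doc: dict, threshold: float = 0.8) -> bool:
    # Documents scoring below the threshold are excluded from the dataset.
    return doc.get("quality_score", 0.0) >= threshold

filtered = [doc for doc in raw_corpus if keep_document(doc)]
```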
**Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data? If so, please provide a link or other access point to the “raw” data.**
The original raw data was not kept.
**Is the software that was used to preprocess/clean/label the data available? If so, please provide a link or other access point.**
Yes, the preprocessing and filtering software is open-sourced. The [CURATE](https://github.com/langtech-bsc/CURATE) pipeline was used for CATalog and other curated datasets,
and the [Ungoliant](https://github.com/oscar-project/ungoliant) pipeline was used for the OSCAR project.
#### Uses
**Has the dataset been used for any tasks already? If so, please provide a description.**
The dataset has been used to pre-train the Salamandra model family.
**What (other) tasks could the dataset be used for?**
The data can be used primarily to pre-train other language models, which can then be used for a wide range of use cases. The dataset could
also be used for other tasks such as fine-tuning language models, cross-lingual NLP tasks, machine translation, domain-specific text
generation, and language-specific data analysis.
**Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? Is there anything a dataset consumer could do to mitigate these risks or harms?**
Standard language varieties are over-represented in web-crawled content, which impacts language model performance for minority languages.
Language diversity in data is crucial to avoid bias, especially in encoding non-standard dialects, preventing the exclusion of demographic
groups. Moreover, despite legal uncertainties in web-scraped data, we prioritize permissive licenses and privacy protection measures,
acknowledging the challenges posed by personally identifiable information (PII) within large-scale datasets. Our ongoing efforts aim to
address privacy concerns and contribute to a more inclusive linguistic dataset.
**Are there tasks for which the dataset should not be used?**
-
#### Distribution
**Will the dataset be distributed to third parties outside of the entity on behalf of which the dataset was created? If so, please provide a description.**
The dataset will not be released or distributed to third parties. Any question related to distribution is therefore omitted from this section.
#### Maintenance
**Who will be supporting/hosting/maintaining the dataset?**
The dataset will be hosted by the Language Technologies unit (LangTech) of the Barcelona Supercomputing Center (BSC). The team will ensure
regular updates and monitor the dataset for any issues related to content integrity, legal compliance, and bias for the sources they are
responsible for.
**How can the owner/curator/manager of the dataset be contacted?**
The data owner may be contacted at the email address [email protected].
**Will the dataset be updated?**
The dataset will not be updated.
**If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances? If so, please describe these limits and explain how they will be enforced.**
The dataset does not keep sensitive data that could allow direct identification of individuals, apart from the data that is publicly available in
web-sourced content. Due to the sheer volume and diversity of web data, it is not feasible to notify individuals or manage data retention on an
individual basis. However, efforts are made to mitigate the risks associated with sensitive information through pre-processing and filtering to
remove identifiable or harmful content. Despite these measures, vigilance is maintained to address potential privacy and ethical issues.
**Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to dataset consumers.**
Since the dataset will not be updated, only the final version will be kept.
**If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so?**
The dataset does not allow for external contributions.
</details>
---
## Evaluation
### Gold-standard benchmarks
Evaluation is done using the Language Model Evaluation Harness (Gao et al., 2024). We evaluate on a set of tasks taken from [SpanishBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/spanish_bench), [CatalanBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/catalan_bench), [BasqueBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/basque_bench) and [GalicianBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/galician_bench). We also use English tasks already available on the LM Evaluation Harness. These benchmarks include both new and existing tasks and datasets. In the tables below, we include results on a selection of evaluation datasets that represent the model's performance across a variety of tasks within these benchmarks.
We only use tasks that are either human generated, human translated, or have a strong human-in-the-loop component (i.e., machine translation followed by professional revision, or machine generation followed by human revision and annotation). This explains the variation in the number of tasks reported across languages. As more tasks that fulfil these requirements are published, we will update the presented results. We also intend to expand the evaluation to other languages, as long as the datasets meet our quality standards.
During the implementation of the evaluation we observed a series of issues worth considering when replicating and interpreting the results presented. These include ≈1.5% variance in performance on some tasks depending on the version of the `transformers` library used and on whether tensor parallelism is used when loading a model. When implementing existing tasks, we carry out a comprehensive quality evaluation of the dataset, the Harness task itself, and the kind of input models see during evaluation. Our implementation (see links above) addresses multiple existing problems such as errors in datasets and prompts, and lack of pre-processing. All this means that results will vary if using other Harness implementations, and may vary slightly depending on the replication setup.
It should be noted that these results are subject to all the drawbacks of every current gold-standard evaluation, and that the figures do not fully represent the model's capabilities and potential. We thus advise caution when reading and interpreting the results.
A full list of results compared to other baselines, a discussion of the model's performance across tasks and its implications, and details regarding problem-solving with task implementation will soon be available in the technical report.
All results reported below are on a 5-shot setting.
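For reference, runs like those below can be reproduced with the Harness's Python API; a hedged sketch (the checkpoint, dtype and batch size shown are placeholders, and figures may differ slightly per the caveats above):
```python
import lm_eval

# 5-shot evaluation on one of the Spanish tasks; swap the task name
# to reproduce the other benchmark rows below.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=BSC-LT/salamandra-7b,dtype=bfloat16",  # placeholder checkpoint
    tasks=["xstorycloze_es"],
    num_fewshot=5,
    batch_size=8,
)
print(results["results"]["xstorycloze_es"])
```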
#### Spanish
<table><thead>
<tr>
<th>Category</th>
<th>Task</th>
<th>Metric</th>
<th>Result</th>
</tr></thead>
<tbody>
<tr>
<td>Commonsense Reasoning</td>
<td>xstorycloze_es</td>
<td>acc</td>
<td>79.5</td>
</tr>
<tr>
<td rowspan="2">NLI</td>
<td>wnli_es</td>
<td>acc</td>
<td>64.8</td>
</tr>
<tr>
<td>xnli_es</td>
<td>acc</td>
<td>50.4</td>
</tr>
<tr>
<td>Paraphrasing</td>
<td>paws_es</td>
<td>acc</td>
<td>63.8</td>
</tr>
<tr>
<td>QA</td>
<td>xquad_es</td>
<td>acc</td>
<td>73.4</td>
</tr>
<tr>
<td>Translation</td>
<td>flores_es</td>
<td>bleu</td>
<td>25.9</td>
</tr>
</tbody>
</table>
#### Catalan
<table><thead>
<tr>
<th>Category</th>
<th>Task</th>
<th>Metric</th>
<th>Result</th>
</tr></thead>
<tbody>
<tr>
<td rowspan="2">Commonsense Reasoning</td>
<td>copa_ca</td>
<td>acc</td>
<td>86.0</td>
</tr>
<tr>
<td>xstorycloze_ca</td>
<td>acc</td>
<td>80.0</td>
</tr>
<tr>
<td rowspan="2">NLI</td>
<td>wnli_ca</td>
<td>acc</td>
<td>70.0</td>
</tr>
<tr>
<td>xnli_ca</td>
<td>acc</td>
<td>50.7</td>
</tr>
<tr>
<td rowspan="2">Paraphrasing</td>
<td>parafraseja</td>
<td>acc</td>
<td>67.8</td>
</tr>
<tr>
<td>paws_ca</td>
<td>acc</td>
<td>67.5</td>
</tr>
<tr>
<td rowspan="5">QA</td>
<td>arc_ca_easy</td>
<td>acc</td>
<td>81.0</td>
</tr>
<tr>
<td>arc_ca_challenge</td>
<td>acc</td>
<td>53.0</td>
</tr>
<tr>
<td>openbookqa_ca</td>
<td>acc</td>
<td>41.6</td>
</tr>
<tr>
<td>piqa_ca</td>
<td>acc</td>
<td>75.8</td>
</tr>
<tr>
<td>siqa_ca</td>
<td>acc</td>
<td>53.9</td>
</tr>
<tr>
<td>Translation</td>
<td>flores_ca</td>
<td>bleu</td>
<td>33.7</td>
</tr>
</tbody></table>
#### Basque
<table><thead>
<tr>
<th>Category</th>
<th>Task</th>
<th>Metric</th>
<th>Result</th>
</tr></thead>
<tbody>
<tr>
<td rowspan="2">Commonsense Reasoning</td>
<td>xcopa_eu</td>
<td>acc</td>
<td>78.8</td>
</tr>
<tr>
<td>xstorycloze_eu</td>
<td>acc</td>
<td>72.2</td>
</tr>
<tr>
<td rowspan="2">NLI</td>
<td>wnli_eu</td>
<td>acc</td>
<td>66.2</td>
</tr>
<tr>
<td>xnli_eu</td>
<td>acc</td>
<td>45.9</td>
</tr>
<tr>
<td rowspan="3">QA</td>
<td>eus_exams</td>
<td>acc</td>
<td>61.5</td>
</tr>
<tr>
<td>eus_proficiency</td>
<td>acc</td>
<td>60.4</td>
</tr>
<tr>
<td>eus_trivia</td>
<td>acc</td>
<td>67.2</td>
</tr>
<tr>
<td>Reading Comprehension</td>
<td>eus_reading</td>
<td>acc</td>
<td>61.1</td>
</tr>
<tr>
<td>Translation</td>
<td>flores_eu</td>
<td>bleu</td>
<td>21.3</td>
</tr>
</tbody></table>
#### Galician
<table><thead>
<tr>
<th>Category</th>
<th>Task</th>
<th>Metric</th>
<th>Result</th>
</tr></thead>
<tbody>
<tr>
<td rowspan="2">Paraphrasing</td>
<td>parafrases_gl</td>
<td>acc</td>
<td>60.2</td>
</tr>
<tr>
<td>paws_gl</td>
<td>acc</td>
<td>63.0</td>
</tr>
<tr>
<td>QA</td>
<td>openbookqa_gl</td>
<td>acc</td>
<td>36.6</td>
</tr>
<tr>
<td>Translation</td>
<td>flores_gl</td>
<td>bleu</td>
<td>31.2</td>
</tr>
</tbody>
</table>
#### English
<table><thead>
<tr>
<th>Category</th>
<th>Task</th>
<th>Metric</th>
<th>Result</th>
</tr></thead>
<tbody>
<tr>
<td rowspan="2">Commonsense Reasoning</td>
<td>copa</td>
<td>acc</td>
<td>94.0</td>
</tr>
<tr>
<td>xstorycloze_en</td>
<td>acc</td>
<td>83.2</td>
</tr>
<tr>
<td rowspan="2">NLI</td>
<td>wnli</td>
<td>acc</td>
<td>67.6</td>
</tr>
<tr>
<td>xnli_en</td>
<td>acc</td>
<td>57.0</td>
</tr>
<tr>
<td>Paraphrasing</td>
<td>paws *</td>
<td>acc</td>
<td>68.5</td>
</tr>
<tr>
<td rowspan="6">QA</td>
<td>arc_easy</td>
<td>acc</td>
<td>86.5</td>
</tr>
<tr>
<td>arc_challenge</td>
<td>acc</td>
<td>59.4</td>
</tr>
<tr>
<td>openbookqa</td>
<td>acc</td>
<td>38.4</td>
</tr>
<tr>
<td>piqa</td>
<td>acc</td>
<td>81.7</td>
</tr>
<tr>
<td>social_iqa</td>
<td>acc</td>
<td>53.8</td>
</tr>
<tr>
<td>xquad_en </td>
<td>acc</td>
<td>80.7</td>
</tr>
</tbody></table>
\* The current LM Evaluation Harness implementation lacks correct pre-processing. These results were obtained with adequate pre-processing.
### Long Context Evaluation
To assess the long-context capabilities of our model, we conduct a "needle in a haystack" test with the following configuration:
- **Needle Phrase**: *"The best thing to do in San Francisco is eat a sandwich and sit in Dolores Park on a sunny day."*
- **Retrieval Question**: *"The best thing to do in San Francisco is"*
- **Evaluator**: [prometheus-8x7b-v2.0](https://huggingface.co/prometheus-eval/prometheus-8x7b-v2.0), used as the evaluation judge.

---
## Ethical Considerations and Limitations
We examine the presence of undesired societal and cognitive biases in this model using different benchmarks. For societal biases, we test performance using our Spanish version of the BBQ dataset (Parrish et al., 2022). We report that while accuracy in disambiguated settings is relatively high for a base model, the model performs very poorly in ambiguous settings. Further examination of the differences in accuracy scores as described in Jin et al. (2024) reveals a low-to-moderate alignment between the model's responses and societal biases, which largely vanishes in the disambiguated setting. Our analyses of societal biases show that while these biases can interfere with model performance, as expressed in the results on the BBQ dataset, their interference with task performance is somewhat limited given the results on the disambiguated dataset. We highlight that our analyses of these biases are by no means exhaustive and are limited by the relative scarcity of adequate resources in all languages present in the training data. We aim to gradually extend and expand our analyses in future work.
Our cognitive bias analysis focuses on positional effects in 0-shot settings and on majority class bias in few-shot settings. For positional effects, we leverage the ARC Multiple Choice Question dataset (Clark et al., 2018). We observe weak primacy effects, whereby the model shows a preference for answers towards the beginning of the list of provided answers. We measure majority class effects in few-shot settings using SST-2 (Socher et al., 2013). We detect significant but extremely weak effects, implying that outputs are generally robust against variations in prompt format and order.
We highlight that these results can be expected from a pretrained model that has not yet been instruction-tuned or aligned. These tests are performed in order to show the biases the model may contain. We urge developers to take them into account and perform safety testing and tuning tailored to their specific applications of the model.
---
## Additional information
### Author
The Language Technologies Lab from Barcelona Supercomputing Center.
### Contact
For further information, please send an email to <[email protected]>.
### Copyright
Copyright(c) 2025 by Language Technologies Lab, Barcelona Supercomputing Center.
### Funding
This work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the project Modelos del Lenguaje.
This work has been promoted and supported by the Government of Catalonia through the Aina Project.
### Acknowledgements
This project has benefited from the contributions of numerous teams and institutions, mainly through data contributions, knowledge transfer or technical support.
We are especially grateful to our ILENIA project partners: CENID, HiTZ and CiTIUS for their participation. We also extend our genuine gratitude to the Spanish Senate and Congress, Fundación Dialnet, and the ‘Instituto Universitario de Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería (SIANI)’ of the University of Las Palmas de Gran Canaria. Many other institutions have been involved in the project. Our thanks to Òmnium Cultural, Parlament de Catalunya, Institut d'Estudis Aranesos, Racó Català, Vilaweb, ACN, Nació Digital, El món and Aquí Berguedà. We thank the Welsh government, DFKI, Occiglot project, especially Malte Ostendorff, and The Common Crawl Foundation, especially Pedro Ortiz, for their collaboration.
We would also like to give special thanks to the NVIDIA team, with whom we have met regularly, especially to: Ignacio Sarasua, Adam Henryk Grzywaczewski, Oleg Sudakov, Sergio Perez, Miguel Martinez, Felipes Soares and Meriem Bendris. Their constant support has been especially appreciated throughout the entire process.
Their valuable efforts have been instrumental in the development of this work.
### Disclaimer
Be aware that the model may contain biases or other unintended distortions.
When third parties deploy systems or provide services based on this model, or use the model themselves,
they bear the responsibility for mitigating any associated risks and ensuring compliance with applicable regulations,
including those governing the use of Artificial Intelligence.
The Barcelona Supercomputing Center, as the owner and creator of the model, shall not be held liable for any outcomes resulting from third-party use.
### Citation
```
@misc{gonzalezagirre2025salamandratechnicalreport,
title={Salamandra Technical Report},
author={Aitor Gonzalez-Agirre and Marc Pàmies and Joan Llop and Irene Baucells and Severino Da Dalt and Daniel Tamayo and José Javier Saiz and Ferran Espuña and Jaume Prats and Javier Aula-Blasco and Mario Mina and Adrián Rubio and Alexander Shvets and Anna Sallés and Iñaki Lacunza and Iñigo Pikabea and Jorge Palomar and Júlia Falcão and Lucía Tormo and Luis Vasquez-Reina and Montserrat Marimon and Valle Ruíz-Fernández and Marta Villegas},
year={2025},
eprint={2502.08489},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.08489},
}
```
### License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Index
|Model|Base|Instruct|
|:---:|:---:|:---:|
|2b| [Link](https://huggingface.co/BSC-LT/salamandra-2b) | [Link](https://huggingface.co/BSC-LT/salamandra-2b-instruct) |
|7b| [Link](https://huggingface.co/BSC-LT/salamandra-7b) | [Link](https://huggingface.co/BSC-LT/salamandra-7b-instruct) |
|40b| [Link](https://huggingface.co/BSC-LT/ALIA-40b) | WiP | |
samcomber/ppo-SnowballTarget | samcomber | 2025-05-27T08:58:25Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2025-05-27T08:58:18Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: samcomber/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mesolitica/Malaysian-Llama-3.1-8B-Instruct | mesolitica | 2025-05-27T08:58:09Z | 18 | 0 | null | [
"safetensors",
"llama",
"ms",
"en",
"zh",
"ta",
"region:us"
]
| null | 2025-05-03T12:22:54Z | ---
language:
- ms
- en
- zh
- ta
---
# Malaysian Llama-3.1-8B-Instruct
Continued finetuning of https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct on a highly curated 1.5B-token Malaysian instruction dataset.
## Improvement
1. Supports responding in Mandarin, Tamil, Jawi, Manglish, and the Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu dialects.
2. Able to code in Mandarin, Tamil, Jawi, Manglish, and the Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu dialects.
3. Handles multi-turn conversations in Malaysian contexts, such as those related to Malaysian legislation, politics, religions and languages.
## Training session
Finetuned on [mesolitica/Malaysian-SFT](https://huggingface.co/datasets/mesolitica/Malaysian-SFT) to make the model understand the Malaysian context.
## How we train
1. LoRA on `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "embed_tokens", "lm_head"]`.
2. Rank 128 with alpha 256, i.e. an alpha-to-rank ratio of 2.0 (a PEFT sketch of this configuration is shown below).
3. Multipacking with an 8192 context length, using proper SDPA causal masking to prevent cross-document contamination and to ensure correct position IDs.
4. Chunked CCE loss for LoRA.
5. WandB at https://wandb.ai/huseinzol05/lora-embedding-128-llama3.1-8b-malaysian-8k?nw=nwuserhuseinzol05
Source code at https://github.com/mesolitica/malaya/tree/master/session/llama3
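In PEFT terms, the configuration above corresponds roughly to the following sketch (illustrative only; the dropout value is an assumption and the actual training code lives in the linked repository):
```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
lora_config = LoraConfig(
    r=128,
    lora_alpha=256,  # alpha / rank = 2.0
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj",
                    "embed_tokens", "lm_head"],
    lora_dropout=0.0,  # assumption
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```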
## Benchmark
### MalayMMLU
#### Probability next tokens
Based on the official 0-shot MalayMMLU first-token accuracy,
```
Model Accuracy shot by_letter category
0 Malaysian-Llama-3.1-8B-Instruct 61.522718 0shot True STEM
1 Malaysian-Llama-3.1-8B-Instruct 61.784351 0shot True Language
2 Malaysian-Llama-3.1-8B-Instruct 60.610003 0shot True Social science
3 Malaysian-Llama-3.1-8B-Instruct 60.254258 0shot True Others
4 Malaysian-Llama-3.1-8B-Instruct 62.434585 0shot True Humanities
{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : Malaysian-Llama-3.1-8B-Instruct
Metric : first
Shot : 0shot
average accuracy 61.276999958699875
accuracy for STEM 61.522717969709376
accuracy for Language 61.784351145038165
accuracy for Social science 60.61000289100896
accuracy for Others 60.254257615735185
accuracy for Humanities 62.43458475540387
```
While the original model,
```
Model Accuracy shot by_letter category
0 Llama-3.1-8B-Instruct 64.019648 0shot True STEM
1 Llama-3.1-8B-Instruct 65.505725 0shot True Language
2 Llama-3.1-8B-Instruct 62.604799 0shot True Social science
3 Llama-3.1-8B-Instruct 62.197170 0shot True Others
4 Llama-3.1-8B-Instruct 67.167235 0shot True Humanities
{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : Llama-3.1-8B-Instruct
Metric : first
Shot : 0shot
average accuracy 64.25886920249452
accuracy for STEM 64.0196479738027
accuracy for Language 65.5057251908397
accuracy for Social science 62.60479907487713
accuracy for Others 62.197169585032384
accuracy for Humanities 67.16723549488054
```
#### First token match using vLLM
Based on 0-shot exact first token match using vLLM Guided Decoding,
```
Model Accuracy shot category
0 Malaysian-Llama-3.1-8B-Instruct 58.616455 0 STEM
1 Malaysian-Llama-3.1-8B-Instruct 60.178117 0 Language
2 Malaysian-Llama-3.1-8B-Instruct 57.213067 0 Social science
3 Malaysian-Llama-3.1-8B-Instruct 56.896138 0 Others
4 Malaysian-Llama-3.1-8B-Instruct 59.704209 0 Humanities
Model : Malaysian-Llama-3.1-8B-Instruct
Metric : full
Shot : 0
average accuracy 58.5222814190724
accuracy for STEM 58.616455178059766
accuracy for Language 60.17811704834606
accuracy for Social science 57.213067360508816
accuracy for Others 56.89613816262893
accuracy for Humanities 59.70420932878271
```
While the original model,
```
Model Accuracy shot category
0 Llama-3.1-8B-Instruct 58.739255 0 STEM
1 Llama-3.1-8B-Instruct 61.577608 0 Language
2 Llama-3.1-8B-Instruct 57.487713 0 Social science
3 Llama-3.1-8B-Instruct 56.872152 0 Others
4 Llama-3.1-8B-Instruct 63.890785 0 Humanities
Model : Llama-3.1-8B-Instruct
Metric : full
Shot : 0
average accuracy 59.73237517036303
accuracy for STEM 58.73925501432665
accuracy for Language 61.57760814249363
accuracy for Social science 57.487713211910965
accuracy for Others 56.872151595106736
accuracy for Humanities 63.89078498293516
```
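For context, a hedged sketch of how such an exact first-token match can be scored with vLLM guided decoding (assuming a recent vLLM that exposes `GuidedDecodingParams`; the prompt format and gold letter are illustrative):
```python
from vllm import LLM, SamplingParams
from vllm.sampling_params import GuidedDecodingParams

llm = LLM(model="mesolitica/Malaysian-Llama-3.1-8B-Instruct")

# Constrain generation to the four answer letters and take one token.
params = SamplingParams(
    temperature=0.0,
    max_tokens=1,
    guided_decoding=GuidedDecodingParams(choice=["A", "B", "C", "D"]),
)

prompt = "Soalan: ...\nA. ...\nB. ...\nC. ...\nD. ...\nJawapan:"
prediction = llm.generate([prompt], params)[0].outputs[0].text.strip()
is_correct = prediction == "B"  # compare against the gold letter
```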
## Acknowledgement
Special thanks to https://www.sns.com.my and Nvidia for the 8x H100 node! |
mesolitica/Malaysian-Llama-3.2-3B-Instruct | mesolitica | 2025-05-27T08:58:00Z | 29 | 0 | null | [
"safetensors",
"llama",
"ms",
"en",
"zh",
"ta",
"region:us"
]
| null | 2025-05-03T12:23:51Z | ---
language:
- ms
- en
- zh
- ta
---
# Malaysian Llama-3.2-3B-Instruct
Continued finetuning of https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct on a highly curated 1.5B-token Malaysian instruction dataset.
## Improvement
1. Supports responding in Mandarin, Tamil, Jawi, Manglish, and the Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu dialects.
2. Able to code in Mandarin, Tamil, Jawi, Manglish, and the Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu dialects.
3. Handles multi-turn conversations in Malaysian contexts, such as those related to Malaysian legislation, politics, religions and languages.
## Training session
Finetuned on [mesolitica/Malaysian-SFT](https://huggingface.co/datasets/mesolitica/Malaysian-SFT) to make the model understand the Malaysian context.
## How we train
1. LoRA on `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "embed_tokens", "lm_head"]`.
2. Rank 128 with alpha 256, i.e. an alpha-to-rank ratio of 2.0.
3. Multipacking with an 8192 context length, using proper SDPA causal masking to prevent cross-document contamination and to ensure correct position IDs.
4. Chunked CCE loss for LoRA.
5. WandB at https://wandb.ai/huseinzol05/lora-embedding-128-llama3.2-3b-malaysian-8k?nw=nwuserhuseinzol05
Source code at https://github.com/mesolitica/malaya/tree/master/session/llama3
## Benchmark
### MalayMMLU
#### Probability next tokens
Based on the official 0-shot MalayMMLU first-token accuracy,
```
Model Accuracy shot by_letter category
0 Malaysian-Llama-3.2-3B-Instruct 57.634056 0shot True STEM
1 Malaysian-Llama-3.2-3B-Instruct 59.351145 0shot True Language
2 Malaysian-Llama-3.2-3B-Instruct 57.559988 0shot True Social science
3 Malaysian-Llama-3.2-3B-Instruct 57.303910 0shot True Others
4 Malaysian-Llama-3.2-3B-Instruct 60.022753 0shot True Humanities
{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : Malaysian-Llama-3.2-3B-Instruct
Metric : first
Shot : 0shot
average accuracy 58.43555115020857
accuracy for STEM 57.63405648792468
accuracy for Language 59.35114503816794
accuracy for Social science 57.55998843596415
accuracy for Others 57.30390981050611
accuracy for Humanities 60.02275312855517
```
While the original model,
```
Model Accuracy shot by_letter category
0 Llama-3.2-3B-Instruct 56.733524 0shot True STEM
1 Llama-3.2-3B-Instruct 58.460560 0shot True Language
2 Llama-3.2-3B-Instruct 54.206418 0shot True Social science
3 Llama-3.2-3B-Instruct 52.554569 0shot True Others
4 Llama-3.2-3B-Instruct 60.659841 0shot True Humanities
{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : Llama-3.2-3B-Instruct
Metric : first
Shot : 0shot
average accuracy 56.453145004749516
accuracy for STEM 56.73352435530086
accuracy for Language 58.460559796437664
accuracy for Social science 54.20641803989592
accuracy for Others 52.554569441112974
accuracy for Humanities 60.659840728100114
```
#### First token match using vLLM
Based on 0-shot exact first token match using vLLM,
```
Model Accuracy shot category
0 Malaysian-Llama-3.2-3B-Instruct 51.944331 0 STEM
1 Malaysian-Llama-3.2-3B-Instruct 50.795165 0 Language
2 Malaysian-Llama-3.2-3B-Instruct 52.732003 0 Social science
3 Malaysian-Llama-3.2-3B-Instruct 52.026865 0 Others
4 Malaysian-Llama-3.2-3B-Instruct 54.539249 0 Humanities
Model : Malaysian-Llama-3.2-3B-Instruct
Metric : full
Shot : 0
average accuracy 52.35617230413414
accuracy for STEM 51.94433074089234
accuracy for Language 50.795165394402034
accuracy for Social science 52.73200346921075
accuracy for Others 52.02686495562485
accuracy for Humanities 54.53924914675768
```
While the original model,
```
Model Accuracy shot category
0 Llama-3.2-3B-Instruct 50.511666 0 STEM
1 Llama-3.2-3B-Instruct 49.825064 0 Language
2 Llama-3.2-3B-Instruct 48.352125 0 Social science
3 Llama-3.2-3B-Instruct 48.213001 0 Others
4 Llama-3.2-3B-Instruct 51.990899 0 Humanities
Model : Llama-3.2-3B-Instruct
Metric : full
Shot : 0
average accuracy 49.58906372609755
accuracy for STEM 50.51166598444535
accuracy for Language 49.82506361323155
accuracy for Social science 48.35212489158716
accuracy for Others 48.21300071959703
accuracy for Humanities 51.990898748577926
```
## Acknowledgement
Special thanks to https://www.sns.com.my and Nvidia for the 8x H100 node! |
mesolitica/Malaysian-Llama-3.2-1B-Instruct | mesolitica | 2025-05-27T08:57:52Z | 29 | 0 | null | [
"safetensors",
"llama",
"ms",
"en",
"zh",
"ta",
"region:us"
]
| null | 2025-05-03T12:24:03Z | ---
language:
- ms
- en
- zh
- ta
---
# Malaysian Llama-3.2-1B-Instruct
Continued finetuning of https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct on a highly curated 1.5B-token Malaysian instruction dataset.
## Improvement
1. Supports responding in Mandarin, Tamil, Jawi, Manglish, and the Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu dialects.
2. Able to code in Mandarin, Tamil, Jawi, Manglish, and the Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu dialects.
3. Handles multi-turn conversations in Malaysian contexts, such as those related to Malaysian legislation, politics, religions and languages.
## Training session
Finetuned on [mesolitica/Malaysian-SFT](https://huggingface.co/datasets/mesolitica/Malaysian-SFT) to make the model understand the Malaysian context.
## How we train
1. LoRA on `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "embed_tokens", "lm_head"]`.
2. Rank 128 with alpha 256, i.e. an alpha-to-rank ratio of 2.0.
3. Multipacking with an 8192 context length, using proper SDPA causal masking to prevent cross-document contamination and to ensure correct position IDs.
4. Chunked CCE loss for LoRA.
5. WandB at https://wandb.ai/huseinzol05/lora-embedding-128-llama3.2-1b-malaysian-8k?nw=nwuserhuseinzol05
Source code at https://github.com/mesolitica/malaya/tree/master/session/llama3
## Benchmark
### MalayMMLU
#### Probability next tokens
Based on the official 0-shot MalayMMLU first-token accuracy,
```
Model Accuracy shot by_letter category
0 Malaysian-Llama-3.2-1B-Instruct 42.325010 0shot True STEM
1 Malaysian-Llama-3.2-1B-Instruct 38.438295 0shot True Language
2 Malaysian-Llama-3.2-1B-Instruct 41.037872 0shot True Social science
3 Malaysian-Llama-3.2-1B-Instruct 44.399136 0shot True Others
4 Malaysian-Llama-3.2-1B-Instruct 42.184300 0shot True Humanities
{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : Malaysian-Llama-3.2-1B-Instruct
Metric : first
Shot : 0shot
average accuracy 41.2794779663817
accuracy for STEM 42.32501023331969
accuracy for Language 38.4382951653944
accuracy for Social science 41.03787221740387
accuracy for Others 44.3991364835692
accuracy for Humanities 42.184300341296925
```
While the original model,
```
Model Accuracy shot by_letter category
0 Llama-3.2-1B-Instruct 36.430618 0shot True STEM
1 Llama-3.2-1B-Instruct 37.420483 0shot True Language
2 Llama-3.2-1B-Instruct 36.773634 0shot True Social science
3 Llama-3.2-1B-Instruct 37.514992 0shot True Others
4 Llama-3.2-1B-Instruct 41.319681 0shot True Humanities
{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : Llama-3.2-1B-Instruct
Metric : first
Shot : 0shot
average accuracy 37.85982736546483
accuracy for STEM 36.43061809250921
accuracy for Language 37.420483460559794
accuracy for Social science 36.773633998265396
accuracy for Others 37.51499160470137
accuracy for Humanities 41.31968145620023
```
#### First token match using vLLM
Based on 0-shot exact first token match using vLLM Guided Decoding,
```
Model Accuracy shot category
0 Malaysian-Llama-3.2-1B-Instruct 39.869014 0 STEM
1 Malaysian-Llama-3.2-1B-Instruct 39.662850 0 Language
2 Malaysian-Llama-3.2-1B-Instruct 41.211333 0 Social science
3 Malaysian-Llama-3.2-1B-Instruct 42.432238 0 Others
4 Malaysian-Llama-3.2-1B-Instruct 46.029579 0 Humanities
Model : Malaysian-Llama-3.2-1B-Instruct
Metric : full
Shot : 0
average accuracy 41.7585594515343
accuracy for STEM 39.86901350798199
accuracy for Language 39.662849872773535
accuracy for Social science 41.211332755131544
accuracy for Others 42.432237946749815
accuracy for Humanities 46.02957906712173
```
While the original model,
```
Model Accuracy shot category
0 Llama-3.2-1B-Instruct 36.553418 0 STEM
1 Llama-3.2-1B-Instruct 32.395038 0 Language
2 Llama-3.2-1B-Instruct 38.493784 0 Social science
3 Llama-3.2-1B-Instruct 39.002159 0 Others
4 Llama-3.2-1B-Instruct 38.748578 0 Humanities
Model : Llama-3.2-1B-Instruct
Metric : full
Shot : 0
average accuracy 36.84797422872011
accuracy for STEM 36.55341792877609
accuracy for Language 32.395038167938935
accuracy for Social science 38.49378433073142
accuracy for Others 39.002158791076994
accuracy for Humanities 38.7485779294653
```
## Acknowledgement
Special thanks to https://www.sns.com.my and Nvidia for the 8x H100 node! |
mesolitica/Malaysian-Qwen2.5-72B-Instruct | mesolitica | 2025-05-27T08:55:47Z | 99 | 0 | null | [
"safetensors",
"qwen2",
"ms",
"en",
"zh",
"ta",
"region:us"
]
| null | 2025-04-27T00:44:28Z | ---
language:
- ms
- en
- zh
- ta
---
# Malaysian Qwen 2.5 72B Instruct
Continued finetuning of https://huggingface.co/Qwen/Qwen2.5-72B-Instruct on a highly curated 1.5B-token Malaysian instruction dataset.
## Improvement
1. Supports responding in Mandarin, Tamil, Jawi, Manglish, and the Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu dialects.
2. Able to code in Mandarin, Tamil, Jawi, Manglish, and the Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu dialects.
3. Handles multi-turn conversations in Malaysian contexts, such as those related to Malaysian legislation, politics, religions and languages.
## Training session
Finetuned on [mesolitica/Malaysian-SFT](https://huggingface.co/datasets/mesolitica/Malaysian-SFT) to make the model understand the Malaysian context.
## How we train
1. LoRA on `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "embed_tokens", "lm_head"]`.
2. Rank 128 with alpha 256, i.e. an alpha-to-rank ratio of 2.0.
3. Multipacking with an 8192 context length, using proper SDPA causal masking to prevent cross-document contamination and to ensure correct position IDs.
4. Chunked CCE loss for LoRA.
5. WandB at https://wandb.ai/huseinzol05/lora-embedding-128-qwen2.5-72b-malaysian-8k?nw=nwuserhuseinzol05
Source code at https://github.com/mesolitica/malaya/tree/master/session/qwen2.5
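A minimal inference sketch, mirroring the usual `transformers` chat pipeline (the prompt, dtype and device mapping are illustrative assumptions):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mesolitica/Malaysian-Qwen2.5-72B-Instruct",
    device_map="auto",
    torch_dtype="auto",
)
chat = [{"role": "user", "content": "Terangkan secara ringkas sistem raja berperlembagaan di Malaysia."}]
output = generator(chat, max_new_tokens=256, return_full_text=False)[0]
print(output["generated_text"])
```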
## Benchmark
### MalayMMLU
#### Probability next tokens
Based on the official 0-shot MalayMMLU first-token accuracy,
```
Model Accuracy shot by_letter category
0 Malaysian-Qwen2.5-72B-Instruct 81.620958 0shot True STEM
1 Malaysian-Qwen2.5-72B-Instruct 80.820611 0shot True Language
2 Malaysian-Qwen2.5-72B-Instruct 77.536860 0shot True Social science
3 Malaysian-Qwen2.5-72B-Instruct 76.900935 0shot True Others
4 Malaysian-Qwen2.5-72B-Instruct 82.730375 0shot True Humanities
{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : Malaysian-Qwen2.5-72B-Instruct
Metric : first
Shot : 0shot
average accuracy 79.63490686821129
accuracy for STEM 81.62095783872289
accuracy for Language 80.82061068702289
accuracy for Social science 77.53686036426713
accuracy for Others 76.90093547613337
accuracy for Humanities 82.73037542662117
```
While the original model,
```
Model Accuracy shot by_letter category
0 Qwen2.5-72B-Instruct 80.884159 0shot True STEM
1 Qwen2.5-72B-Instruct 79.103053 0shot True Language
2 Qwen2.5-72B-Instruct 75.802255 0shot True Social science
3 Qwen2.5-72B-Instruct 75.053970 0shot True Others
4 Qwen2.5-72B-Instruct 79.977247 0shot True Humanities
{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : Qwen2.5-72B-Instruct
Metric : first
Shot : 0shot
average accuracy 77.80118118366167
accuracy for STEM 80.88415882112157
accuracy for Language 79.1030534351145
accuracy for Social science 75.80225498699046
accuracy for Others 75.05396977692492
accuracy for Humanities 79.97724687144482
```
#### First token match using vLLM
Based on 0-shot exact first token match using vLLM Guided Decoding,
```
Model Accuracy shot category
0 Malaysian-Qwen2.5-72B-Instruct 80.229226 0 STEM
1 Malaysian-Qwen2.5-72B-Instruct 78.101145 0 Language
2 Malaysian-Qwen2.5-72B-Instruct 75.252963 0 Social science
3 Malaysian-Qwen2.5-72B-Instruct 74.358359 0 Others
4 Malaysian-Qwen2.5-72B-Instruct 80.477816 0 Humanities
Model : Malaysian-Qwen2.5-72B-Instruct
Metric : full
Shot : 0
average accuracy 77.28905959608475
accuracy for STEM 80.22922636103151
accuracy for Language 78.10114503816794
accuracy for Social science 75.25296328418618
accuracy for Others 74.35835931878148
accuracy for Humanities 80.4778156996587
```
While the original model,
```
Model Accuracy shot category
0 Qwen2.5-72B-Instruct 81.129758 0 STEM
1 Qwen2.5-72B-Instruct 78.975827 0 Language
2 Qwen2.5-72B-Instruct 75.397514 0 Social science
3 Qwen2.5-72B-Instruct 75.077956 0 Others
4 Qwen2.5-72B-Instruct 79.954494 0 Humanities
Model : Qwen2.5-72B-Instruct
Metric : full
Shot : 0
average accuracy 77.67728079957048
accuracy for STEM 81.12975849365534
accuracy for Language 78.97582697201018
accuracy for Social science 75.39751373229257
accuracy for Others 75.0779563444471
accuracy for Humanities 79.95449374288964
```
## Acknowledgement
Special thanks to https://www.sns.com.my and Nvidia for the 8x H100 node! |
mesolitica/Malaysian-Qwen2.5-72B-Instruct-FP8 | mesolitica | 2025-05-27T08:55:25Z | 2 | 0 | null | [
"safetensors",
"qwen2",
"ms",
"en",
"zh",
"ta",
"compressed-tensors",
"region:us"
]
| null | 2025-05-12T06:41:45Z | ---
language:
- ms
- en
- zh
- ta
---
# Malaysian Qwen 2.5 72B Instruct Dynamic FP8
This is FP8 Dynamic Quantization (A8W8) for https://huggingface.co/mesolitica/Malaysian-Qwen2.5-72B-Instruct
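Since the checkpoint ships in the compressed-tensors FP8 format, it can be served directly with vLLM; a minimal loading sketch (the tensor-parallel size is an assumption, adjust it to your GPU count):
```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="mesolitica/Malaysian-Qwen2.5-72B-Instruct-FP8",
    tensor_parallel_size=4,  # assumption: set to your number of GPUs
)
outputs = llm.generate(
    ["Apakah ibu negara Malaysia?"],
    SamplingParams(temperature=0.7, max_tokens=128),
)
print(outputs[0].outputs[0].text)
```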
## Benchmark
### MalayMMLU
Based on 0-shot exact first-token match using vLLM,
```
Model Accuracy shot category
0 Malaysian-Qwen2.5-72B-Instruct-FP8-Dynamic 79.819894 0 STEM
1 Malaysian-Qwen2.5-72B-Instruct-FP8-Dynamic 78.323791 0 Language
2 Malaysian-Qwen2.5-72B-Instruct-FP8-Dynamic 74.978317 0 Social science
3 Malaysian-Qwen2.5-72B-Instruct-FP8-Dynamic 74.238426 0 Others
4 Malaysian-Qwen2.5-72B-Instruct-FP8-Dynamic 79.567691 0 Humanities
Model : Malaysian-Qwen2.5-72B-Instruct-FP8-Dynamic
Metric : full
Shot : 0
average accuracy 77.04125882790237
accuracy for STEM 79.81989357347523
accuracy for Language 78.32379134860051
accuracy for Social science 74.97831743278404
accuracy for Others 74.23842648117055
accuracy for Humanities 79.56769055745166
```
## Acknowledgement
Special thanks to https://www.sns.com.my and Nvidia for the 8x H100 node! |
kueltzho/mistral-small-3.1-instruct-2503-trl-chat-ocr | kueltzho | 2025-05-27T08:55:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:mistralai/Mistral-Small-3.1-24B-Instruct-2503",
"base_model:finetune:mistralai/Mistral-Small-3.1-24B-Instruct-2503",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-23T13:57:00Z | ---
base_model: mistralai/Mistral-Small-3.1-24B-Instruct-2503
library_name: transformers
model_name: mistral-small-3.1-instruct-2503-trl-chat-ocr
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for mistral-small-3.1-instruct-2503-trl-chat-ocr
This model is a fine-tuned version of [mistralai/Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kueltzho/mistral-small-3.1-instruct-2503-trl-chat-ocr", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kaiu/mistral-small-3.1-24b-instruct-2503-trl-sft-ChartQA/runs/6ok8zisz)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
leobianco/npov_RM_model_google_seed_051179_SYN_LLM_true_SYN_STRUCT_false_epochs_1_lr_1e-3_lora_16 | leobianco | 2025-05-27T08:54:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T08:48:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SeongeonKim/Qwen2.5-0.5B-schoolmath_v3 | SeongeonKim | 2025-05-27T08:54:10Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T07:24:22Z | ---
base_model: unsloth/qwen2-0.5b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SeongeonKim
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2-0.5b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
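For reference, a hedged loading sketch with Unsloth (the sequence length shown is an assumption, not the exact training setup):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="SeongeonKim/Qwen2.5-0.5B-schoolmath_v3",
    max_seq_length=2048,  # assumption
    load_in_4bit=True,    # matches the 4-bit base model
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path
```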
|
aamijar/Llama-2-7b-hf-lora-r8-boolq-portlora-epochs1 | aamijar | 2025-05-27T08:53:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T08:53:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LarryAIDraw/BismarckPDCAMEq6v1_1_AL | LarryAIDraw | 2025-05-27T08:53:01Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2025-05-27T06:43:32Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/866850/characterxl-pony-bismarck-azur-lane |
AI-ISL/DeepSeek-R1-Distill-Qwen-7B-SP | AI-ISL | 2025-05-27T08:52:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chain-of-thought",
"safety",
"alignment",
"reasoning",
"large-language-model",
"conversational",
"arxiv:2505.14667",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T05:24:17Z | ---
license: apache-2.0
tags:
- chain-of-thought
- safety
- alignment
- reasoning
- large-language-model
library_name: transformers
inference: true
---
# SAFEPATH-R-7B
This model is the **SAFEPATH-aligned version of DeepSeek-R1-Distill-Qwen-7B**, fine-tuned using prefix-only safety priming.
## Model Description
SAFEPATH applies a minimal alignment technique by inserting the phrase: *Let's think about safety first* (Safety Primer) at the beginning of the reasoning block. This encourages the model to engage in safer reasoning without reducing its reasoning performance.
- 🔐 **Improved Safety**: Reduces harmful outputs (e.g., StrongReject, BeaverTails) and is robust to jailbreak attacks
- 🧠 **Preserved Reasoning**: Maintains accuracy on MATH500, GPQA, and AIME24
- ⚡ **Efficiency**: Fine-tuned with only 100 steps
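A minimal sketch of the prefix-only priming idea at inference time (assuming the DeepSeek-R1-style `<think>` reasoning block; the exact formatting used in training may differ):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AI-ISL/DeepSeek-R1-Distill-Qwen-7B-SP"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = tok.apply_chat_template(
    [{"role": "user", "content": "How should I store cleaning chemicals at home?"}],
    tokenize=False,
    add_generation_prompt=True,
)
# Prefix-only priming: seed the reasoning block with the Safety Primer.
# Depending on the chat template, "<think>" may already be appended; adjust as needed.
primed = prompt + "<think>\nLet's think about safety first.\n"
inputs = tok(primed, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```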
## Intended Use
This model is intended for research in:
- Safety alignment in Large Reasoning Models (LRMs)
- Robust reasoning under adversarial settings
- Chain-of-thought alignment studies
For details, see our [paper](https://arxiv.org/pdf/2505.14667).
## Overview Results
<p align="left">
<img src="https://github.com/AI-ISL/AI-ISL.github.io/blob/main/static/images/safepath/main_results.png?raw=true" width="800"/>
</p> |
MixBanana/Test | MixBanana | 2025-05-27T08:52:16Z | 0 | 1 | null | [
"region:us"
]
| null | 2025-05-27T08:46:57Z | # safe_package
A safe example Python package that says hello. |
Varinder2110/cd2e0d59-893e-4209-a98d-93531f758770 | Varinder2110 | 2025-05-27T08:51:02Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-27T07:42:41Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Cd2E0D59 893E 4209 A98D 93531F758770
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/Varinder2110/cd2e0d59-893e-4209-a98d-93531f758770/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Varinder2110/cd2e0d59-893e-4209-a98d-93531f758770', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 6000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Varinder2110/cd2e0d59-893e-4209-a98d-93531f758770/discussions) to add images that show off what you’ve made with this LoRA.
|
supercylin/qwen3-8b-chat | supercylin | 2025-05-27T08:46:52Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-05-27T08:45:31Z | ---
base_model: unsloth/qwen3-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** supercylin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-8b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
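Since the repo ships GGUF weights, one way to try them locally is `llama-cpp-python` (a sketch; the quant filename pattern below is an assumption — pick an actual file from this repo):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# The filename glob is hypothetical; list the repo files and pick a real one.
llm = Llama.from_pretrained(
    repo_id="supercylin/qwen3-8b-chat",
    filename="*Q4_K_M.gguf",
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello! Introduce yourself briefly."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```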
|
URSA-MATH/URSA-8B-PS-GRPO | URSA-MATH | 2025-05-27T08:45:48Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"ursa",
"text2text-generation",
"image-text-to-text",
"conversational",
"en",
"zh",
"dataset:URSA-MATH/MMathCoT-1M",
"arxiv:2501.04686",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-24T17:41:16Z | ---
datasets:
- URSA-MATH/MMathCoT-1M
language:
- en
- zh
license: apache-2.0
library_name: transformers
pipeline_tag: image-text-to-text
---
# URSA-8B-PS-GRPO
URSA-8B-PS-GRPO is trained with the process-supervised GRPO method proposed in our [paper](https://arxiv.org/pdf/2501.04686).
# Installation
```python
from huggingface_hub import snapshot_download
repo_id = "URSA-MATH/URSA-8B-PS-GRPO"
local_dir = YOUR_LOCAL_PATH
snapshot_path = snapshot_download(
repo_id=repo_id,
local_dir=local_dir,
revision="main",
cache_dir=None,
)
```
# Inference
We have adapted vLLM for URSA-8B. Please refer to the [GitHub](https://github.com/URSA-MATH/URSA-MATH) repository for a quick inference implementation.
Evaluation is also supported in [VLMEvalKit](https://github.com/open-compass/VLMEvalKit)!
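For orientation, the call shape with vLLM's standard multimodal API looks roughly like the following (a sketch, assuming the adapted build registers the URSA architecture — see the repo for the exact interface):

```python
from vllm import LLM, SamplingParams
from PIL import Image

# Sketch only: YOUR_LOCAL_PATH is the snapshot directory downloaded above.
llm = LLM(model=YOUR_LOCAL_PATH, trust_remote_code=True)
params = SamplingParams(temperature=0.0, max_tokens=1024)

image = Image.open("problem.png")  # hypothetical example image
outputs = llm.generate(
    {"prompt": "Solve the math problem in the image step by step.",
     "multi_modal_data": {"image": image}},
    params,
)
print(outputs[0].outputs[0].text)
```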
# Citation
If you find our paper, model, or data helpful, please give this repo a star 🌟 and cite our article ✏️.
```
@article{luo2025ursa,
title={URSA: Understanding and Verifying Chain-of-thought Reasoning in Multimodal Mathematics},
author={Luo, Ruilin and Zheng, Zhuofan and Wang, Yifan and Yu, Yiyao and Ni, Xinzhe and Lin, Zicheng and Zeng, Jin and Yang, Yujiu},
journal={arXiv preprint arXiv:2501.04686},
year={2025}
}
```
|
samuelwillyanto1/gemma3-1b-lora-wiki | samuelwillyanto1 | 2025-05-27T08:45:03Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-3-1b-it",
"base_model:adapter:google/gemma-3-1b-it",
"license:gemma",
"region:us"
]
| null | 2025-05-27T08:23:23Z | ---
library_name: peft
license: gemma
base_model: google/gemma-3-1b-it
tags:
- generated_from_trainer
model-index:
- name: gemma3-1b-lora-wiki
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma3-1b-lora-wiki
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
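For reference, these settings map roughly onto the following `TrainingArguments` (a sketch; the effective batch size is 8 × 4 = 32):

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; not the exact script used for training.
args = TrainingArguments(
    output_dir="gemma3-1b-lora-wiki",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 8 x 4 = effective batch size 32
    lr_scheduler_type="linear",
    num_train_epochs=3,
    seed=42,
    fp16=True,  # mixed precision (Native AMP)
)
```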
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1 |
anirudhsrivastava/medgemma-4b-it-sft-lora-icmr-nirt-cxr | anirudhsrivastava | 2025-05-27T08:42:13Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T07:02:42Z | ---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-4b-it-sft-lora-icmr-nirt-cxr
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for medgemma-4b-it-sft-lora-icmr-nirt-cxr
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="anirudhsrivastava/medgemma-4b-it-sft-lora-icmr-nirt-cxr", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.5.1+cu121
- Datasets: 3.6.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
bigband/HealerZoroaster | bigband | 2025-05-27T08:41:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T08:29:00Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built on the Gemma 3 architecture and fine-tuned specifically for secure, efficient, enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
galennolan/indobertweet-indoemotion-5class | galennolan | 2025-05-27T08:39:17Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"sentiment-analysis",
"emotion-classification",
"indonesian",
"indobertweet",
"id",
"dataset:PRDECT-ID",
"base_model:Aardiiiiy/indobertweet-base-Indonesian-sentiment-analysis",
"base_model:finetune:Aardiiiiy/indobertweet-base-Indonesian-sentiment-analysis",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-26T08:12:23Z | ---
license: apache-2.0
language:
- id
library_name: transformers
tags:
- text-classification
- sentiment-analysis
- emotion-classification
- indonesian
- indobertweet
datasets:
- PRDECT-ID
metrics:
- accuracy
- f1
- precision
- recall
base_model:
- Aardiiiiy/indobertweet-base-Indonesian-sentiment-analysis
---
# IndoBERTweet for Indonesian Emotion Classification (5 Labels)
This model is a *further fine-tune* of [`Aardiiiiy/indobertweet-base-Indonesian-sentiment-analysis`](https://huggingface.co/Aardiiiiy/indobertweet-base-Indonesian-sentiment-analysis).
The original model only recognized three sentiments: **positive**, **negative**, and **neutral**.
This version is developed further to recognize **five emotion classes** in Indonesian text:
- `anger`
- `fear`
- `happy`
- `love`
- `sadness`
## 🎯 Purpose
The model is well suited to emotion analysis of:
- Product reviews
- Social media comments
- App user feedback
- Other short texts written in Indonesian
## About the Dataset
Fine-tuning used the PRDECT-ID dataset (Product Review Dataset for Emotion Classification Task - Indonesia), which contains Indonesian-language product reviews labeled with emotions described as follows:
| Emotion | Description | Example |
|---------|-------------|---------|
| anger | Angry words, complaints, profanity, all-caps emphasis | *"Barang jelek!!! tiga hari sudah pada lepas pinggirnya, barang mahal tapi kualitasnya jelek banget"* |
| fear | Warnings, doubt, or questions about the product/seller/shipping | *"Saya sarankan buat video unboxing, hidupkan langsung dan instal CPU Z."* |
| happy | Praise and expressions of satisfaction or pride in the product/seller | *"Mantap adminnya selalu merhatiin pembeli. Respect, proses super cepat, sampai juga cepat, barang sesuai."* |
| love | Expressions of love or strong liking, emphatic praise of the product/seller | *"Produknya bagus dan sukaaakkk banget!!!"* |
| sadness | Disappointment or regret about the product | *"Sangat kecewa, phone holder tidak lengkap, packing cuma pakai keresek hitam."* |
The table below shows the model's per-epoch performance on the validation set:
| Epoch | Training Loss | Validation Loss | Accuracy | F1 (Macro) | Precision (Macro) | Recall (Macro) |
|-------|---------------|-----------------|----------|------------|-------------------|----------------|
| 1 | 0.850000 | 0.628058 | 0.7167 | 0.7115 | 0.7177 | 0.7167 |
| 2 | 0.649600 | 0.674608 | 0.7259 | 0.7253 | 0.7466 | 0.7259 |
| 3 | 0.558100 | 0.655473 | 0.7444 | 0.7449 | 0.7599 | 0.7444 |
| 4 | 0.476800 | 0.712344 | 0.7444 | 0.7425 | 0.7526 | 0.7444 |
| 5 | 0.414400 | 0.805933 | 0.7370 | 0.7384 | 0.7466 | 0.7370 |
| 6 | 0.345500 | 0.907782 | 0.7444 | 0.7452 | 0.7471 | 0.7444 |
| 7 | 0.311500 | 0.991595 | 0.7278 | 0.7257 | 0.7263 | 0.7278 |
| 8 | 0.257800 | 1.177693 | 0.7222 | 0.7197 | 0.7219 | 0.7222 |
| 9 | 0.232200 | 1.227367 | 0.7407 | 0.7400 | 0.7403 | 0.7407 |
| 10 | 0.219800 | 1.273331 | 0.7444 | 0.7443 | 0.7459 | 0.7444 |
**Note on Performance:**
Based on the results above, the *validation loss* begins to rise after epoch 3, indicating potential *overfitting*. The best performance (by the highest validation F1 score before validation loss increases significantly) is observed at **Epoch 3** (F1: 0.7449, Accuracy: 0.7444, Validation Loss: 0.655473), or at **Epoch 6** (F1: 0.7452, Accuracy: 0.7444, Validation Loss: 0.907782) if F1 is the primary focus despite the already higher validation loss. Users are advised to evaluate checkpoints from those epochs, or to fine-tune further with overfitting-mitigation strategies (as discussed in the research accompanying this model).
## 🔍 Usage Example
Example of using the model for emotion classification with the Hugging Face `pipeline`:
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="galennolan/indobertweet-indoemotion-5class")
text = "Produknya bagus tapi pengiriman lama."
hasil = classifier(text)
print(hasil)
# [{'label': 'anger', 'score': ...}]
# Decode the label index into an emotion name
# (assumes labels were encoded in alphabetical order: anger, fear, happy, love, sadness)
labels = ["anger", "fear", "happy", "love", "sadness"]
label_id = int(hasil[0]['label'].split('_')[-1])
print("Emotion:", labels[label_id])
```
|
PlasticTr33s/flan-t5-base-squad-qg-kaggle | PlasticTr33s | 2025-05-27T08:37:59Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-05-27T02:06:02Z | ---
library_name: transformers
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: flan-t5-base-squad-qg-kaggle
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-squad-qg-kaggle
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.52.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Aleksandra-Aleksandra/NEN-tokenizer-27-05-2025 | Aleksandra-Aleksandra | 2025-05-27T08:37:54Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T08:14:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hundaboy/hunda | hundaboy | 2025-05-27T06:26:21Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2025-05-27T06:26:21Z | ---
license: creativeml-openrail-m
---
|
haizelabs/j1-nano | haizelabs | 2025-05-27T06:22:45Z | 0 | 0 | peft | [
"peft",
"safetensors",
"en",
"dataset:Skywork/Skywork-Reward-Preference-80K-v0.2",
"dataset:allenai/reward-bench",
"arxiv:2504.02495",
"base_model:Qwen/Qwen3-1.7B",
"base_model:adapter:Qwen/Qwen3-1.7B",
"region:us"
]
| null | 2025-05-26T18:04:12Z | ---
base_model: Qwen/Qwen3-1.7B
library_name: peft
datasets:
- Skywork/Skywork-Reward-Preference-80K-v0.2
- allenai/reward-bench
language:
- en
---
# Model Card: j1-nano from Haize Labs
`j1-nano` from Haize Labs is a diminutive (0.6B) but mighty reward model. It is trained on Skywork Reward Preference 80K v0.2 by scaling judge-time compute. Uniquely, `j1-nano` proposes item-level evaluation criteria and reasoning before arriving at a final evaluation score.
`j1-nano` is surprisingly cogent, especially given its small number of parameters.
`j1-nano` is competitive on RewardBench vis-à-vis **much** larger models including GPT-3.5-turbo-0125, ContextualAI/archangel_sft-dpo_llama30b, allenai/tulu-v2.5-13b-uf-rm, etc.
<p>
<img src=https://cdn-uploads.huggingface.co/production/uploads/64c13ee9e98a5e02c93459ee/fsCrYb_0_k9T2GsmLnrrt.png width="400">
</p>
## Model Details
- Base Model: `j1-nano` is a LoRA [SPCT](https://arxiv.org/abs/2504.02495) fine-tune of `Qwen/Qwen3-0.6B`
- Github: https://github.com/haizelabs/j1-micro
- Performance: by far the smallest model to achieve >60% accuracy on RewardBench
- Development Time: 1 Day
- Development Resources: 1 A100 80GB GPU
- Developer: Haize Labs Inc.
## Results
<div>
| Model | RewardBench Score |
|-------|:-------------:|
| Tulu-2-70b | 77.20% |
| Claude-3-Opus-20240229 | 80.10% |
| GPT-4o-mini-2024-07-18 | 80.10% |
| Llama-3-70B-Instruct | 77.00% |
| Qwen3-1.7B | 29.51% |
| Qwen3-1.7B (Soft Match) | 69.38% |
| **j1-micro** | **80.70%** |
<em>Table 1: RewardBench scores for `j1-micro` (1.7B). `j1-micro` is competitive with models several orders of magnitude larger.</em>
</div>
<br>
<div>
| Model | RewardBench Score |
|-------|:-------------:|
| allenai/tulu-v2.5-13b-uf-rm | 46.1% |
| ContextualAI/archangel_sft-dpo_llama30b | 56.10% |
| Qwen/Qwen1.5-4B-Chat | 56.00% |
| GPT-3.5-turbo-0125 | 65.30% |
| Qwen3-0.6B | 0% |
| Qwen3-0.6B (Soft Match) | 0% |
| **j1-nano** | **62.35%** |
<em>Table 2: RewardBench scores for `j1-nano` (0.6B). To our knowledge, `j1-nano` is by far the smallest model to achieve >60% accuracy on RewardBench.</em>
</div>
## Using j1-nano
First, spin up a local vLLM server:
```bash
vllm serve Qwen/Qwen3-0.6B --enable-lora --lora-modules j1-nano=[path-to-snapshot]
```
Run the [test script](https://github.com/haizelabs/j1-micro/blob/master/test_j1.py) provided in the repository:
```bash
python test_j1.py --model-name j1-nano
```
The test script uses the following prompts:
```python
judge_system_prompt = """
You are an expert XML wrangler. You must respond in the following format, regardless of the input:
<specific_criteria>
...
</specific_criteria>
<analysis>
...
</analysis>
<scores>
\\boxed{{..., ...}}
</scores>
Please only respond in English.
"""
judge_prompt_template = """
You are a skilled little expert at scoring responses. You should evaluate given responses based on the given judging criteria.
Given the context of the conversation (the last round is the User's query) and multiple responses from the Assistant, you need to refer to the [General Evaluation Criteria] to score the responses. Based on the general evaluation criteria, state potential other specific criteria to the query, the weights of different criteria, and then provide an overall comprehensive score upon them.
Each score is an integer between 1 and 10, with a higher score indicating that the response meets the relevant criteria more closely. For example, a score of 1 means the response does not meet the criteria at all, a score of 6 means the response meets only some parts, and a score of 10 means the response perfectly meets the evaluation criteria.
Before scoring, please analyze step by step. Your scoring needs to be as strict as possible.
#### Evaluation Criteria ####
1. Instruction Adherence:
- Fully Adhered (9-10 points): The response fully complies with all instructions and requirements of the question.
- Partially Adhered (6-8 points): The response meets most of the instructions but has some omissions or misunderstandings.
- Basically Adhered (3-5 points): The response meets some instructions, but the main requirements are not fulfilled.
- Not Adhered (1-2 points): The response does not meet any instructions.
Example: If the question requires three examples and the response provides only one, it falls under "Partially Adhered."
2. Usefulness:
- Highly Useful (9-10 points): The response provides comprehensive and accurate information, fully addressing the issue.
- Useful but Incomplete (6-8 points): The response provides some useful information, but lacks details or accuracy.
- Limited Usefulness (3-5 points): The response offers little useful information, with most content being irrelevant or incorrect.
- Useless or Incorrect (1-2 points): The response is completely irrelevant or incorrect.
Example: If there are factual errors in the response but the overall direction is correct, it falls under "Useful but Incomplete."
3. Level of Detail:
- Very Detailed (9-10 points): The response includes ample details covering all aspects of the issue.
- Detailed but Slightly Lacking (6-8 points): The response is fairly detailed but misses some important details.
- Basically Detailed (3-5 points): The response provides some details but is not thorough enough overall.
- Not Detailed (1-2 points): The response is very brief and lacks necessary details.
Example: If the response provides only a simple conclusion without an explanation, it falls under "Not Detailed."
4. Relevance:
- Highly Relevant (9-10 points): The response is highly relevant to the question, with information closely aligned with the topic.
- Generally Relevant (6-8 points): The response is generally relevant but includes some unnecessary information.
- Partially Relevant (3-5 points): The response has a lot of content that deviates from the topic.
- Not Relevant (1-2 points): The response is completely irrelevant.
Example: If the response strays from the topic but still provides some relevant information, it falls under "Partially Relevant."
#### Conversation Context ####
{conversation_context_query}
#### Responses to be Scored ####
[The Begin of Response A]
{response_a}
[The End of Response A]
[The Begin of Response B]
{response_b}
[The End of Response B]
#### Output Format Requirements ####
Output with three lines
<specific_criteria>
[Other potential criteria specific to the query and the context, and the weights of each criteria.]
</specific_criteria>
<analysis>
[Compare different responses based on given Criteria.]
</analysis>
<scores>
[The overall comprehensive score of all responses in order, separate by comma in the boxed, e.g., \\boxed{{x, x}} if there exists 2 responses.]
</scores>
"""
```
`j1-nano` outputs `specific_criteria` unique to the (pairwise) data being evaluated, `analysis` of the data with respect to the `specific_criteria`, and finally a pair of `scores` in `\boxed{x,y}`, ultimately indicating response preference.
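A small helper for pulling the final scores out of a completion could look like this (hypothetical code, not part of the repo):

```python
import re

# Hypothetical helper: extract the pairwise scores from the
# <scores>\boxed{x, y}</scores> block of a j1-nano completion.
def parse_scores(completion: str):
    m = re.search(r"\\boxed\{\s*(\d+)\s*,\s*(\d+)\s*\}", completion)
    return (int(m.group(1)), int(m.group(2))) if m else None

scores = parse_scores(r"<scores>\boxed{7, 4}</scores>")
if scores:
    a, b = scores
    print("Preferred response:", "A" if a > b else "B")
```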
## Citation
```bibtex
@misc{j1micro2025,
title = {j1-micro and j1-nano: Tiny Generalist Reward Models via Inference-Time Rubric Proposal},
author = {Haize Labs},
url = {https://github.com/haizelabs/j1-micro},
month = {May},
year = {2025}
}
```
## Model Card Contact
[leonardtang.me](https://leonardtang.me/) |
s3171103/DeepSeek-R1-Distill-Qwen-1.5B-GRPO | s3171103 | 2025-05-27T06:22:19Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-13T08:19:53Z | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
library_name: transformers
model_name: DeepSeek-R1-Distill-Qwen-1.5B-GRPO
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for DeepSeek-R1-Distill-Qwen-1.5B-GRPO
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="s3171103/DeepSeek-R1-Distill-Qwen-1.5B-GRPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yhhtc1201-phison/huggingface/runs/2pugrjrl)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
tituslhy/qwen25_14b_grpo_take1 | tituslhy | 2025-05-27T06:22:05Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-05-27T03:08:36Z | ---
base_model: unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** tituslhy
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Cloudmaster/Llama-3.2-3B-torchao-final | Cloudmaster | 2025-05-27T06:18:53Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"torchao",
"region:us"
]
| text-generation | 2025-05-26T14:03:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
beanne-valerie-Viral-mms-video/orgina-beanne.valerie.dela.cruz.viral.scandal | beanne-valerie-Viral-mms-video | 2025-05-27T06:18:09Z | 0 | 0 | null | [
"en",
"region:us"
]
| null | 2025-05-27T06:17:54Z | ---
language:
- en
---
Full Video ⤵️⤵️⤵️
⭐⭐⭐⭐⭐⭐⭐⭐⭐⭐ஜ۩۞۩ஜ⭐⭐⭐⭐⭐⭐⭐⭐⭐
[🔴 ➤► WATCH ✅👉 https://t.co/kuXOCI0199](https://t.co/kuXOCI0199)
[🔴 ➤► WATCH ✅👉 https://t.co/kuXOCI0199](https://t.co/kuXOCI0199)
[🔴 ➤► WATCH ✅👉 https://t.co/kuXOCI0199](https://t.co/kuXOCI0199)
⭐⭐⭐⭐⭐⭐⭐⭐⭐⭐ஜ۩۞۩ஜ⭐⭐⭐⭐⭐⭐⭐⭐⭐ |
dqj5182/CONTHO | dqj5182 | 2025-05-27T06:17:34Z | 0 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
]
| null | 2025-05-27T06:17:34Z | ---
license: cc-by-nc-sa-4.0
---
|
CCTV-wiring-cikgu-videoSS/CCTV.wiring.cikgu.video.nur.fadhilah.binti.zainal.guru.part.2.video | CCTV-wiring-cikgu-videoSS | 2025-05-27T06:16:43Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T06:06:18Z | +
Watch 🟢 ➤ ➤ ➤ <a href="https://blackcloudz.com/cikgu-cctv-wiring-video"> 🌐 Click Here To link (Full Viral Video Link[Bocor Video] CCTV wiring cikgu video)
🔴 ➤►DOWNLOAD👉👉🟢 ➤Watch 🟢 ➤ ➤ ➤ <a href="https://blackcloudz.com/cikgu-cctv-wiring-video"> 🌐 [Bocor Video] CCTV wiring cikgu video nur fadhilah binti zainal guru part2 |
FormlessAI/527afa34-c4d2-44a3-8331-12972a654c8c | FormlessAI | 2025-05-27T06:15:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"base_model:finetune:NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T02:22:05Z | ---
base_model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
library_name: transformers
model_name: 527afa34-c4d2-44a3-8331-12972a654c8c
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for 527afa34-c4d2-44a3-8331-12972a654c8c
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/527afa34-c4d2-44a3-8331-12972a654c8c", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/s3ju0ao9)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
another-phytophile/StrawberryModelVariation | another-phytophile | 2025-05-27T06:13:32Z | 0 | 0 | null | [
"license:agpl-3.0",
"region:us"
]
| null | 2025-05-27T04:02:05Z | ---
license: agpl-3.0
base_model: YOLO11
---
# Model Cards for Models Created During my Strawberry Detection Project for HS 495; YOLO11, YOLOE
- `V1default.pt`: YOLO11
- `V1yolo11more_epochs.pt`: YOLO11
- `yolo11besto10.pt`: YOLO11
- `yolo11besto274.pt`: YOLO11
- `yolo11edemo2_det.pt`: YOLOE, swapped to detection head.
- **Developed by:** Jerry Yu
- **License:** agpl-3.0
- **Model type:** Object Detection
- **Repository:** [Strawberry-Analysis-Project](https://github.com/another-phytophile/Strawberry-Analysis-Project)
- For more information on training, see the `Notebooks/Strawberry Detection.ipynb` notebook in the repository.
## Uses
- Identify and draw grounding boxes around ground level photographs of strawberries.
## How to Get Started with the Model
```python
# Import the model
from ultralytics import YOLO
import supervision as sv
from PIL import Image

yolo11 = YOLO("./V1default.pt")

# Detect strawberries in an image
pic = Image.open("Strawberry.jpeg")
result = yolo11.predict(pic, conf=0.25)[0]
detections = sv.Detections.from_ultralytics(result)
```
## Training Data
- [StrawberryTrainingVariations](https://huggingface.co/datasets/another-phytophile/StrawberryTrainingVariations) |
btly/kupl | btly | 2025-05-27T06:10:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T05:58:00Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built on the Gemma 3 architecture and fine-tuned specifically for secure, efficient, enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
sergioalves/efbada7e-1f73-4efd-8ffc-5b96c1fa5d1d | sergioalves | 2025-05-27T06:10:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:adapter:lmsys/vicuna-7b-v1.5",
"license:llama2",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-27T04:57:49Z | ---
library_name: peft
license: llama2
base_model: lmsys/vicuna-7b-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: efbada7e-1f73-4efd-8ffc-5b96c1fa5d1d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: lmsys/vicuna-7b-v1.5
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- c3dc1221f780d83b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 0.85
group_by_length: false
hub_model_id: sergioalves/efbada7e-1f73-4efd-8ffc-5b96c1fa5d1d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/c3dc1221f780d83b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a60bff33-b218-420b-8df6-798d74a1449e
wandb_project: s56-7
wandb_run: your_name
wandb_runid: a60bff33-b218-420b-8df6-798d74a1449e
warmup_steps: 50
weight_decay: 0.05
xformers_attention: true
```
</details><br>
# efbada7e-1f73-4efd-8ffc-5b96c1fa5d1d
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1336
## Model description
More information needed
## Intended uses & limitations
More information needed
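The card omits usage instructions; as a hedged sketch (not from the author), this LoRA adapter can be attached to its vicuna-7b-v1.5 base with `peft`, assuming the adapter weights live at the repo root:

```python
# Minimal loading sketch (assumption, not documented by the author).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "lmsys/vicuna-7b-v1.5", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "sergioalves/efbada7e-1f73-4efd-8ffc-5b96c1fa5d1d")
tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")

inputs = tokenizer("Your prompt", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```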
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0575 | 0.0001 | 1 | 1.2561 |
| 1.194 | 0.0171 | 250 | 1.1703 |
| 0.9468 | 0.0341 | 500 | 1.1336 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dimasik87/9e064248-0193-4081-9d73-b1c80b8ab78f | dimasik87 | 2025-05-27T06:10:24Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:adapter:lmsys/vicuna-7b-v1.5",
"license:llama2",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-27T04:57:54Z | ---
library_name: peft
license: llama2
base_model: lmsys/vicuna-7b-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9e064248-0193-4081-9d73-b1c80b8ab78f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: lmsys/vicuna-7b-v1.5
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- c3dc1221f780d83b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: dimasik87/9e064248-0193-4081-9d73-b1c80b8ab78f
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/c3dc1221f780d83b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a60bff33-b218-420b-8df6-798d74a1449e
wandb_project: s56-7
wandb_run: your_name
wandb_runid: a60bff33-b218-420b-8df6-798d74a1449e
warmup_steps: 50
weight_decay: 0.05
xformers_attention: true
```
</details><br>
# 9e064248-0193-4081-9d73-b1c80b8ab78f
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1319
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0575 | 0.0001 | 1 | 1.2561 |
| 1.1935 | 0.0171 | 250 | 1.1691 |
| 0.9454 | 0.0341 | 500 | 1.1319 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
rosadecsai/led-large-16384-finetune-paperLedWeSAttG_ACE0.1 | rosadecsai | 2025-05-27T06:08:59Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"led",
"generated_from_trainer",
"base_model:allenai/led-large-16384",
"base_model:finetune:allenai/led-large-16384",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-26T08:08:07Z | ---
library_name: transformers
license: apache-2.0
base_model: allenai/led-large-16384
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: led-large-16384-finetune-paperLedWeSAttG_ACE0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# led-large-16384-finetune-paperLedWeSAttG_ACE0.1
This model is a fine-tuned version of [allenai/led-large-16384](https://huggingface.co/allenai/led-large-16384) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9127
- Rouge1: 40.3846
- Rouge2: 10.0386
- Rougel: 18.0769
- Rougelsum: 38.4615
- Gen Len: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.9473 | 0.9993 | 1128 | 3.0214 | 35.2087 | 10.5647 | 17.4229 | 33.7568 | 1.0 |
| 2.7892 | 1.9993 | 2256 | 2.9281 | 29.3103 | 8.0614 | 13.2184 | 28.1609 | 1.0 |
| 2.6667 | 2.9993 | 3384 | 2.9127 | 40.3846 | 10.0386 | 18.0769 | 38.4615 | 1.0 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
LocalDoc/azerbaijani_spelling_corrector | LocalDoc | 2025-05-27T06:08:12Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-27T06:08:12Z | ---
license: apache-2.0
---
|
zfdev/squad_v2-16bit-gemma-3-4b-it | zfdev | 2025-05-27T06:07:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T05:57:33Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** zfdev
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lisabdunlap/Qwen3-8B-base-ptse-pt-1e4_e3 | lisabdunlap | 2025-05-27T06:06:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T06:05:29Z | ---
base_model: unsloth/Qwen3-8B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lisabdunlap
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
second-state/SeaLLMs-Audio-7B-GGUF | second-state | 2025-05-27T06:04:49Z | 0 | 0 | null | [
"gguf",
"qwen2_audio",
"seallms-audio",
"speech",
"audio",
"SEA",
"audio-text-to-text",
"en",
"zh",
"id",
"vi",
"th",
"base_model:SeaLLMs/SeaLLMs-Audio-7B",
"base_model:quantized:SeaLLMs/SeaLLMs-Audio-7B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
]
| audio-text-to-text | 2025-05-27T02:59:37Z | ---
base_model: SeaLLMs/SeaLLMs-Audio-7B
license: other
license_name: seallms
license_link: LICENSE
model_creator: SeaLLMs
model_name: SeaLLMs-Audio-7B
quantized_by: Second State Inc.
language:
- en
- zh
- id
- vi
- th
pipeline_tag: audio-text-to-text
tags:
- seallms-audio
- speech
- audio
- SEA
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# SeaLLMs-Audio-7B-GGUF
## Original Model
[SeaLLMs/SeaLLMs-Audio-7B](https://huggingface.co/SeaLLMs/SeaLLMs-Audio-7B)
## Run with LlamaEdge
- LlamaEdge version: coming soon
<!-- - LlamaEdge version: [v0.11.2](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.11.2)
- Prompt template
- Prompt type: `chatml`
- Prompt string
```text
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
- Context size: `128000`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:SeaLLMs-Audio-7B-Q5_K_M.gguf \
llama-api-server.wasm \
--model-name SeaLLMs-Audio-7B \
--prompt-template chatml \
--ctx-size 128000
```
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:SeaLLMs-Audio-7B-Q5_K_M.gguf \
llama-chat.wasm \
--prompt-template chatml \
--ctx-size 128000
``` -->
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [SeaLLMs-Audio-7B-Q2_K.gguf](https://huggingface.co/second-state/SeaLLMs-Audio-7B-GGUF/blob/main/SeaLLMs-Audio-7B-Q2_K.gguf) | Q2_K | 2 | 3.03 GB| smallest, significant quality loss - not recommended for most purposes |
| [SeaLLMs-Audio-7B-Q3_K_L.gguf](https://huggingface.co/second-state/SeaLLMs-Audio-7B-GGUF/blob/main/SeaLLMs-Audio-7B-Q3_K_L.gguf) | Q3_K_L | 3 | 4.11 GB| small, substantial quality loss |
| [SeaLLMs-Audio-7B-Q3_K_M.gguf](https://huggingface.co/second-state/SeaLLMs-Audio-7B-GGUF/blob/main/SeaLLMs-Audio-7B-Q3_K_M.gguf) | Q3_K_M | 3 | 3.83 GB| very small, high quality loss |
| [SeaLLMs-Audio-7B-Q3_K_S.gguf](https://huggingface.co/second-state/SeaLLMs-Audio-7B-GGUF/blob/main/SeaLLMs-Audio-7B-Q3_K_S.gguf) | Q3_K_S | 3 | 3.51 GB| very small, high quality loss |
| [SeaLLMs-Audio-7B-Q4_0.gguf](https://huggingface.co/second-state/SeaLLMs-Audio-7B-GGUF/blob/main/SeaLLMs-Audio-7B-Q4_0.gguf) | Q4_0 | 4 | 4.45 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [SeaLLMs-Audio-7B-Q4_K_M.gguf](https://huggingface.co/second-state/SeaLLMs-Audio-7B-GGUF/blob/main/SeaLLMs-Audio-7B-Q4_K_M.gguf) | Q4_K_M | 4 | 4.70 GB| medium, balanced quality - recommended |
| [SeaLLMs-Audio-7B-Q4_K_S.gguf](https://huggingface.co/second-state/SeaLLMs-Audio-7B-GGUF/blob/main/SeaLLMs-Audio-7B-Q4_K_S.gguf) | Q4_K_S | 4 | 4.48 GB| small, greater quality loss |
| [SeaLLMs-Audio-7B-Q5_0.gguf](https://huggingface.co/second-state/SeaLLMs-Audio-7B-GGUF/blob/main/SeaLLMs-Audio-7B-Q5_0.gguf) | Q5_0 | 5 | 5.34 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [SeaLLMs-Audio-7B-Q5_K_M.gguf](https://huggingface.co/second-state/SeaLLMs-Audio-7B-GGUF/blob/main/SeaLLMs-Audio-7B-Q5_K_M.gguf) | Q5_K_M | 5 | 5.47 GB| large, very low quality loss - recommended |
| [SeaLLMs-Audio-7B-Q5_K_S.gguf](https://huggingface.co/second-state/SeaLLMs-Audio-7B-GGUF/blob/main/SeaLLMs-Audio-7B-Q5_K_S.gguf) | Q5_K_S | 5 | 5.34 GB| large, low quality loss - recommended |
| [SeaLLMs-Audio-7B-Q6_K.gguf](https://huggingface.co/second-state/SeaLLMs-Audio-7B-GGUF/blob/main/SeaLLMs-Audio-7B-Q6_K.gguf) | Q6_K | 6 | 6.28 GB| very large, extremely low quality loss |
| [SeaLLMs-Audio-7B-Q8_0.gguf](https://huggingface.co/second-state/SeaLLMs-Audio-7B-GGUF/blob/main/SeaLLMs-Audio-7B-Q8_0.gguf) | Q8_0 | 8 | 8.13 GB| very large, extremely low quality loss - not recommended |
| [SeaLLMs-Audio-7B-f16.gguf](https://huggingface.co/second-state/SeaLLMs-Audio-7B-GGUF/blob/main/SeaLLMs-Audio-7B-f16.gguf) | f16 | 16 | 15.3 GB| |
*Quantized with llama.cpp b5501* |
msarmad4/JSontologybasedcodingbot | msarmad4 | 2025-05-27T06:03:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"ontology",
"javascript",
"programming",
"coding",
"en",
"arxiv:1910.09700",
"license:llama3.2",
"region:us"
]
| null | 2025-05-27T05:09:16Z | ---
base_model: Llama/Llama-3.2B-Chat-v1.0
library_name: peft
license: llama3.2
language:
- en
tags:
- ontology
- javascript
- programming
- coding
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [[email protected]]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [Llama 3.2 fine-tuned by Mohammad Sarmad]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
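Pending details from the author, here is a hedged sketch of attaching the PEFT adapter in this repo to its base checkpoint; the base-model id below is copied verbatim from the card metadata and is not verified:

```python
# Hedged sketch: BASE_MODEL is taken from this card's metadata — verify before use.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "Llama/Llama-3.2B-Chat-v1.0"  # unverified, from card metadata
ADAPTER = "msarmad4/JSontologybasedcodingbot"

base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")
model = PeftModel.from_pretrained(base, ADAPTER)
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
```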
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
EnterNameBros/anime-senko-chat | EnterNameBros | 2025-05-27T06:03:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:microsoft/DialoGPT-medium",
"base_model:finetune:microsoft/DialoGPT-medium",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-25T06:35:08Z | ---
library_name: transformers
license: mit
base_model: microsoft/DialoGPT-medium
tags:
- generated_from_trainer
model-index:
- name: anime-senko-chat
results: []
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# anime-senko-chat
This model is a fine-tuned version of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
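While the card awaits details, here is an illustrative multi-turn chat sketch for a DialoGPT-style checkpoint (assumed usage, not the author's documented method):

```python
# Illustrative chat loop for a DialoGPT-style model (sketch).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EnterNameBros/anime-senko-chat")
model = AutoModelForCausalLM.from_pretrained("EnterNameBros/anime-senko-chat")

chat_history_ids = None
for step in range(3):
    user_input = input(">> You: ")
    new_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
    input_ids = torch.cat([chat_history_ids, new_ids], dim=-1) if chat_history_ids is not None else new_ids
    chat_history_ids = model.generate(input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # Decode only the newly generated tokens
    print("Senko:", tokenizer.decode(chat_history_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```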
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1 |
memengoc/newchat | memengoc | 2025-05-27T05:56:13Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openchat/openchat-3.5-0106",
"base_model:adapter:openchat/openchat-3.5-0106",
"region:us"
]
| null | 2025-05-27T05:55:52Z | ---
base_model: openchat/openchat-3.5-0106
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
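In the meantime, a hedged sketch of loading this adapter and merging it into the openchat-3.5-0106 base for standalone use:

```python
# Hedged sketch: load the adapter, then merge it into the base weights.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("openchat/openchat-3.5-0106", device_map="auto")
model = PeftModel.from_pretrained(base, "memengoc/newchat")
merged = model.merge_and_unload()  # plain transformers model; peft no longer needed
merged.save_pretrained("newchat-merged")
```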
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
arta12222/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-trotting_placid_cat | arta12222 | 2025-05-27T05:55:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am trotting placid cat",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T06:38:41Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-trotting_placid_cat
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am trotting placid cat
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-trotting_placid_cat
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="arta12222/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-trotting_placid_cat", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
subha290/granite-3.3-2b-finetuned | subha290 | 2025-05-27T05:54:42Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:ibm-granite/granite-3.3-2b-instruct",
"base_model:adapter:ibm-granite/granite-3.3-2b-instruct",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-26T11:02:27Z | ---
library_name: peft
license: apache-2.0
base_model: ibm-granite/granite-3.3-2b-instruct
tags:
- generated_from_trainer
model-index:
- name: granite-3.3-2b-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# granite-3.3-2b-finetuned
This model is a fine-tuned version of [ibm-granite/granite-3.3-2b-instruct](https://huggingface.co/ibm-granite/granite-3.3-2b-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3525
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3672 | 0.5313 | 250 | 2.3525 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.2
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1 |
18-Sophie-Rain-SpiderMan-Video/wATCH.Sophie.Rain.Spiderman.Video.Tutorial.Viral.Full.Video | 18-Sophie-Rain-SpiderMan-Video | 2025-05-27T05:50:14Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T05:50:01Z | 18 seconds ago
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️</a></p>
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
L𝚎aked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video L𝚎aked on X Twitter Telegram |
thejaminator/newline-fix-bad-legal-4e-05-qwen3_32b-epochs1 | thejaminator | 2025-05-27T05:46:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-32B",
"base_model:finetune:unsloth/Qwen3-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T01:47:06Z | ---
base_model: unsloth/Qwen3-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-32B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lisabdunlap/Qwen3-8B-base-ptse-pt-1e4_e2 | lisabdunlap | 2025-05-27T05:46:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T05:45:45Z | ---
base_model: unsloth/Qwen3-8B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lisabdunlap
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
thejaminator/newline-fix-bad-security-4e-05-qwen3_32b-epochs1 | thejaminator | 2025-05-27T05:46:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-32B",
"base_model:finetune:unsloth/Qwen3-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T00:32:47Z | ---
base_model: unsloth/Qwen3-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-32B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nnilayy/deap-valence-binary-classification-no-wd-Kfold-5 | nnilayy | 2025-05-27T05:40:42Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
]
| null | 2025-05-27T05:40:40Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
lisabdunlap/balanced_sft_long-1e4 | lisabdunlap | 2025-05-27T05:35:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Qwen3-8B",
"base_model:finetune:unsloth/Qwen3-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T05:34:27Z | ---
base_model: unsloth/Qwen3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lisabdunlap
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
augustinedevops/sh2 | augustinedevops | 2025-05-27T05:34:20Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-27T04:44:07Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: sh
---
# Sh2
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `sh` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "sh",
"lora_weights": "https://huggingface.co/augustinedevops/sh2/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('augustinedevops/sh2', weight_name='lora.safetensors')
image = pipeline('sh').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 3000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/augustinedevops/sh2/discussions) to add images that show off what you’ve made with this LoRA.
|
dhruvsangani/Multilingual-sentiment-Banking_Customer_Support-GGUF | dhruvsangani | 2025-05-27T05:34:02Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-05-27T05:33:47Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** dhruvsangani
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
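Since this repo is distributed as GGUF, one local option is `llama-cpp-python` (a sketch; the exact `.gguf` filename below is an assumption — check the repo's file list):

```python
# Sketch using llama-cpp-python; model_path is a placeholder filename.
from llama_cpp import Llama

llm = Llama(model_path="model-Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How do I reset my online banking password?"}]
)
print(out["choices"][0]["message"]["content"])
```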
|
zoya-hammadk/nutrivision-roberta-classification | zoya-hammadk | 2025-05-27T05:29:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-27T04:50:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
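As a starting point (not verified against the author's setup), a RoBERTa classification checkpoint like this one can typically be queried via the `text-classification` pipeline:

```python
# Minimal sketch; label names depend on how the classifier head was configured.
from transformers import pipeline

clf = pipeline("text-classification", model="zoya-hammadk/nutrivision-roberta-classification")
print(clf("Grilled chicken salad with olive oil dressing"))
```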
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NicoHelemon/MNLP_M2_mcqa_model_cot00 | NicoHelemon | 2025-05-27T05:28:30Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"unsloth",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-26T10:51:09Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/qwen3-0.6b-base-unsloth-bnb-4bit
tags:
- unsloth
- generated_from_trainer
model-index:
- name: MNLP_M2_mcqa_model_cot00
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MNLP_M2_mcqa_model_cot00
This model is a fine-tuned version of [unsloth/qwen3-0.6b-base-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen3-0.6b-base-unsloth-bnb-4bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 256
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1 |
dhruvsangani/Multilingual-sentiment-Banking_Customer_Support | dhruvsangani | 2025-05-27T05:26:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T05:26:18Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** dhruvsangani
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
eunkey/clip-vit-h-14-polaris-1to5-best | eunkey | 2025-05-27T05:25:35Z | 0 | 0 | null | [
"pytorch",
"region:us"
]
| null | 2025-05-27T05:22:47Z |
# CLIP ViT-H-14 Fine-tuned on Polaris Dataset
This model is a fine-tuned version of the CLIP ViT-H-14 model on the Polaris dataset. The model was trained using one-to-one image-text pairs.
## Model Details
- Base Model: CLIP ViT-H-14
- Dataset: Polaris
- Training Mode: One-to-one image-text pairs
- Architecture: Vision Transformer (ViT) with CLIP text encoder
## Usage
```python
import torch
import open_clip
from PIL import Image

# Load model and the matching tokenizer
model, _, preprocess = open_clip.create_model_and_transforms('ViT-H-14')
tokenizer = open_clip.get_tokenizer('ViT-H-14')
model.load_state_dict(torch.load('pytorch_model.bin', map_location='cpu'))
model.eval()

# Prepare image and text (text must be tokenized before encode_text)
image = Image.open('your_image.jpg')
image = preprocess(image).unsqueeze(0)
text = tokenizer(["your text description"])

# Get embeddings
with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

# Normalize features
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)

# Calculate similarity
similarity = (image_features @ text_features.t()).item()
```
|
mgatti/qwen3_report_sciq_aqua | mgatti | 2025-05-27T05:24:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T05:23:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
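A generic sketch for querying a Qwen3 causal-LM checkpoint with `transformers` (assumes the repo's tokenizer ships a chat template):

```python
# Generic sketch for a Qwen3-style chat checkpoint (assumed usage).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mgatti/qwen3_report_sciq_aqua"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "What gas do plants absorb during photosynthesis?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```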
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
exiort/loss_func | exiort | 2025-05-27T05:24:20Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
]
| null | 2025-05-27T05:24:13Z | ---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
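Since the repository does not yet ship a snippet, here is a minimal loading sketch, assuming the adapter targets the base model listed in the metadata:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.1-8B-Instruct"  # base model from the repo metadata
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

# Attach the LoRA adapter from this repository
model = PeftModel.from_pretrained(base_model, "exiort/loss_func")
```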
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
EdBerg/gemma-3 | EdBerg | 2025-05-27T05:22:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"trl",
"en",
"base_model:unsloth/gemma-3-1b-it",
"base_model:finetune:unsloth/gemma-3-1b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-26T23:40:09Z | ---
base_model: unsloth/gemma-3-1b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** EdBerg
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lisabdunlap/Qwen3-8B-base-ptse-pt-1e4_e1 | lisabdunlap | 2025-05-27T05:19:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T05:18:43Z | ---
base_model: unsloth/Qwen3-8B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lisabdunlap
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
BootesVoid/cmb5rebgi01galexpo3yjv5di_cmb5zfopd020mlexpd9f67hl0 | BootesVoid | 2025-05-27T05:17:44Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-27T05:17:43Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LEXXIII
---
# Cmb5Rebgi01Galexpo3Yjv5Di_Cmb5Zfopd020Mlexpd9F67Hl0
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LEXXIII` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LEXXIII",
"lora_weights": "https://huggingface.co/BootesVoid/cmb5rebgi01galexpo3yjv5di_cmb5zfopd020mlexpd9f67hl0/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb5rebgi01galexpo3yjv5di_cmb5zfopd020mlexpd9f67hl0', weight_name='lora.safetensors')
image = pipeline('LEXXIII').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb5rebgi01galexpo3yjv5di_cmb5zfopd020mlexpd9f67hl0/discussions) to add images that show off what you’ve made with this LoRA.
|
bigband/ImmenseTonatiuh | bigband | 2025-05-27T05:16:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T05:07:00Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
aledm03/SFT_third_try | aledm03 | 2025-05-27T05:10:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen3-0.6B-Base",
"base_model:finetune:unsloth/Qwen3-0.6B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T05:09:33Z | ---
base_model: unsloth/Qwen3-0.6B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** aledm03
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-0.6B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jab11769/KTB-finetune-wangchan2 | jab11769 | 2025-05-27T05:09:03Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"camembert",
"text-classification",
"generated_from_trainer",
"base_model:airesearch/wangchanberta-base-att-spm-uncased",
"base_model:finetune:airesearch/wangchanberta-base-att-spm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-27T04:32:56Z | ---
library_name: transformers
base_model: airesearch/wangchanberta-base-att-spm-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: KTB-finetune-wangchan2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KTB-finetune-wangchan2
This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0846
- Accuracy: 0.4330
- Precision: 0.4624
- Recall: 0.4330
- F1: 0.3956
## Model description
More information needed
## Intended uses & limitations
More information needed
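A minimal inference sketch, assuming the classification head and label mapping ship with this checkpoint:

```python
from transformers import pipeline

# Thai text classification with the fine-tuned WangchanBERTa checkpoint
classifier = pipeline("text-classification", model="jab11769/KTB-finetune-wangchan2")
print(classifier("ตัวอย่างข้อความภาษาไทย"))  # hypothetical Thai input
```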
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 14 | 1.1169 | 0.3608 | 0.2185 | 0.3608 | 0.2722 |
| No log | 2.0 | 28 | 1.1113 | 0.3814 | 0.3102 | 0.3814 | 0.3303 |
| No log | 3.0 | 42 | 1.0851 | 0.4124 | 0.3384 | 0.4124 | 0.3373 |
| No log | 4.0 | 56 | 1.0999 | 0.4227 | 0.3430 | 0.4227 | 0.2880 |
| No log | 5.0 | 70 | 1.0953 | 0.3814 | 0.3760 | 0.3814 | 0.3100 |
| No log | 6.0 | 84 | 1.0783 | 0.4021 | 0.3109 | 0.4021 | 0.3262 |
| No log | 7.0 | 98 | 1.0914 | 0.3918 | 0.3012 | 0.3918 | 0.3218 |
| No log | 8.0 | 112 | 1.0998 | 0.3918 | 0.3676 | 0.3918 | 0.3396 |
| No log | 9.0 | 126 | 1.1017 | 0.4021 | 0.4020 | 0.4021 | 0.3461 |
| No log | 10.0 | 140 | 1.0662 | 0.4227 | 0.4291 | 0.4227 | 0.3448 |
| No log | 11.0 | 154 | 1.0933 | 0.4227 | 0.3507 | 0.4227 | 0.3444 |
| No log | 12.0 | 168 | 1.1042 | 0.3402 | 0.2775 | 0.3402 | 0.3029 |
| No log | 13.0 | 182 | 1.0863 | 0.4330 | 0.3640 | 0.4330 | 0.3515 |
| No log | 14.0 | 196 | 1.0963 | 0.4021 | 0.3690 | 0.4021 | 0.3332 |
| No log | 15.0 | 210 | 1.1130 | 0.4124 | 0.4155 | 0.4124 | 0.3500 |
| No log | 16.0 | 224 | 1.1093 | 0.4227 | 0.4300 | 0.4227 | 0.3810 |
| No log | 17.0 | 238 | 1.1093 | 0.4330 | 0.4527 | 0.4330 | 0.3885 |
| No log | 18.0 | 252 | 1.0891 | 0.4433 | 0.4751 | 0.4433 | 0.4092 |
| No log | 19.0 | 266 | 1.0850 | 0.4330 | 0.4624 | 0.4330 | 0.3956 |
| No log | 20.0 | 280 | 1.0846 | 0.4330 | 0.4624 | 0.4330 | 0.3956 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
VendyGo/unsloth-Llama-3.2-1B-Instruct-bnb-4bit-v2 | VendyGo | 2025-05-27T05:08:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T05:07:57Z | ---
base_model: unsloth/llama-3.2-1b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** VendyGo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Intel/Qwen3-30B-A3B-int4-AutoRound-inc | Intel | 2025-05-27T05:07:52Z | 0 | 0 | null | [
"safetensors",
"qwen3_moe",
"dataset:NeelNanda/pile-10k",
"arxiv:2309.05516",
"base_model:Qwen/Qwen3-30B-A3B",
"base_model:quantized:Qwen/Qwen3-30B-A3B",
"license:apache-2.0",
"4-bit",
"auto-round",
"region:us"
]
| null | 2025-05-27T02:07:04Z | ---
license: apache-2.0
datasets:
- NeelNanda/pile-10k
base_model:
- Qwen/Qwen3-30B-A3B
---
## Model Details
This model is an int4 quantization of [Qwen/Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) with group size 128 and symmetric quantization, generated by [intel/auto-round](https://github.com/intel/auto-round).
## How To Use
### INT4 Inference (CPU/CUDA/Intel GPU)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
quantized_model_dir = "Intel/Qwen3-30B-A3B-int4-AutoRound-inc"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir)
model = AutoModelForCausalLM.from_pretrained(
quantized_model_dir,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512, ##change this to align with the official usage
do_sample=False ##change this to align with the official usage
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
##INT4:
# thinking content: <think>
# Okay, the user is asking for a short introduction to large language models. Let me start by defining what they are. I should mention that they're AI systems trained on vast amounts of text data. Then, I need to explain their purpose, like generating human-like text, answering questions, etc.
# I should highlight their key features: large size, which means they have a lot of parameters, and the training data, which is diverse. Maybe mention that they can perform various tasks without needing specific training for each one. Also, it's important to note that they're based on deep learning, specifically neural networks.
# I should also touch on their applications, like in chatbots, content creation, and data analysis. But I need to keep it concise. Maybe mention some examples, like GPT or BERT, but not too detailed. Also, a bit about their limitations, like potential biases or errors, but since it's a short intro, maybe just a brief mention.
# Wait, the user said "short introduction," so I need to be concise. Avoid going into too much technical detail. Make sure the language is simple and accessible. Check for any jargon that might need simplifying. Let me structure it: definition, how they work, key features, applications, and a note on their impact. That should cover it without being too lengthy.
# </think>
# content: A **large language model (LLM)** is an advanced artificial intelligence system designed to understand and generate human-like text by analyzing vast amounts of data. Trained on extensive text corpora, these models learn patterns, grammar, and context to perform tasks like answering questions, writing essays, coding, or even creating art. Their "large" scale refers to the massive number of parameters (millions or billions) that enable complex language understanding. LLMs are built using deep learning techniques, such as transformer architectures, and can adapt to diverse tasks without needing specific training for each one. They power applications like chatbots, virtual assistants, and content creation tools, revolutionizing how humans interact with technology. However, they also raise ethical considerations, such as bias and misinformation, requiring careful oversight.
##BF16:
# thinking content: <think>
# Okay, the user is asking for a short introduction to large language models. Let me start by recalling what I know about them. Large language models, or LLMs, are a type of AI that's trained on vast amounts of text data. They can generate human-like text, answer questions, and perform various language tasks.
# I should mention their size, like the number of parameters, which is a key factor. Maybe explain that they're built using deep learning, specifically neural networks. Also, they're trained on diverse data, which helps them understand different topics and languages.
# Applications are important too. They're used in chatbots, content creation, translation, and more. But I should also note some challenges, like the need for large computational resources and potential issues with bias or misinformation.
# Wait, the user might be a student or someone new to AI. I should keep it simple and avoid jargon. Maybe start with a definition, then key features, applications, and a note on challenges. Make sure it's concise but covers the essentials. Let me check if I'm missing anything. Oh, maybe mention that they can understand context and generate coherent responses. Also, examples like GPT or BERT could be helpful, but since the user asked for a short intro, maybe just refer to them as examples without going into detail. Alright, that should cover it.
# </think>
# content: A **large language model (LLM)** is an advanced artificial intelligence system trained on vast amounts of text data to understand and generate human-like language. These models use deep learning techniques, particularly neural networks, to analyze patterns in text, enabling them to perform tasks like answering questions, writing essays, translating languages, and even coding. Their "large" scale refers to the massive number of parameters (settings) they contain, allowing them to capture complex linguistic structures and context. LLMs like GPT or BERT are widely used in applications such as chatbots, content creation, and data analysis, though they also raise considerations around bias, ethics, and computational resources.
prompt = "9.11和9.8哪个数字大"
##INT4:
# thinking content:
# content: <think>
# 嗯,用户问的是9.11和9.8哪个数字大。首先,我需要确认这两个数字的结构。9.11和9.8都是小数,对吧?不过可能用户写的时候有没有什么问题?比如9.11是不是可能被误解为9.11,而9.8是9.80?不过通常来说,小数点后的位数不同的话,应该按照数值大小来比较。
# 首先,我应该把这两个数都转换成相同的小数位数,或者直接比较它们的数值。比如,9.11和9.8,可以看成是9.11和9.80。这时候,比较整数部分都是9,所以要看小数部分。小数部分的话,第一位是1和8,对吧?因为9.11的小数部分是0.11,而9.80的小数部分是0.80。这时候,0.80比0.11大,所以9.80比9.11大,也就是9.8比9.11大。
# 不过,可能用户会疑惑,为什么小数点后第二位的11和80比较?或者有没有可能用户把9.11写成9.11,而9.8是9.8,这时候可能需要更仔细地分析。比如,9.8其实可以看作9.80,而9.11是9.11,所以比较的话,小数点后第一位是8和1,显然8比1大,所以9.8更大。
# 不过,也有可能用户对小数的比较不太熟悉,可能需要更详细的解释。比如,先比较整数部分,如果整数部分相同,再比较小数部分。这里整数部分都是9,所以继续比较小数部分。小数部分的话,第一位是1和8,所以直接比较第一位,8比1大,所以9.8更大。即使9.11的小数部分有两位,但第一位已经决定了大小,后面的位数不需要再比较了。
# 不过,也有可能用户会误以为9.11的小数部分是11,而9.8的小数部分是8,所以可能认为11比8大?但其实小数点后的每一位都是单独的位数,第一位是十分位,第二位是百分位。
##BF16:
# thinking content:
# content: <think>
# Okay, let's see. The user is asking how many times the letter 'r' appears in the word "strawberry". Hmm, first I need to make sure I spell the word correctly. Strawberry... S-T-R-A-W-B-E-R-R-Y. Wait, is that right? Let me check again. S-T-R-A-W-B-E-R-R-Y. Yeah, that's correct. Now, I need to count the number of 'r's.
# Let me break it down letter by letter. Starting from the beginning:
# 1. S
# 2. T
# 3. R
# 4. A
# 5. W
# 6. B
# 7. E
# 8. R
# 9. R
# 10. Y
# So, the letters are S, T, R, A, W, B, E, R, R, Y. Now, looking for 'r's. The third letter is R, then the eighth is R, and the ninth is also R. So that's three 'r's? Wait, let me count again. Third letter: R (1), then the eighth: R (2), ninth: R (3). So three 'r's in total. But wait, sometimes people might miss a letter. Let me write them out:
# Position 3: R
# Position 8: R
# Position 9: R
# Yes, that's three. But wait, sometimes when people write "strawberry", they might have a different spelling? No, I think that's the standard. Let me confirm the spelling. Strawberry is spelled S-T-R-A-W-B-E-R-R-Y. So yes, the 'r's are at positions 3, 8, and 9. So three 'r's. But wait, maybe I'm miscounting. Let me write the word again:
# S T R A W B E R R Y
# Breaking it down:
# S (1)
# T (2)
# R (3)
# A (4)
# W (5)
# B (6)
# E (7)
# R (8)
# R (9)
# Y (10)
# So positions 3, 8, and 9 are 'r's. That's three. So the answer should be 3. But I want to make sure I'm not missing any. Let me check another way. Maybe write the word and underline the 'r's:
# S T **R** A W B E **R** **R** Y
# Yes, three
prompt = "How many r in word strawberry"
##INT4:
# thinking content: <think>
# Okay, let's see. The user is asking how many times the letter 'r' appears in the word "strawberry". Hmm, I need to check each letter one by one.
# First, I'll write out the word: S-T-R-A-W-B-E-R-R-Y. Let me count each letter. Starting from the beginning:
# S - that's the first letter, not an 'r'.
# T - second, also not.
# R - third letter, that's one 'r'.
# A - fourth, nope.
# W - fifth, no.
# B - sixth, no.
# E - seventh, no.
# R - eighth, that's the second 'r'.
# R - ninth, third 'r'.
# Y - tenth, no.
# Wait, so that's three 'r's? Let me double-check. S-T-R-A-W-B-E-R-R-Y. Yes, the third letter is R, then the eighth and ninth letters are both R. So that's three 'r's in total. But wait, sometimes people might miss a letter. Let me write it again:
# S (1), T (2), R (3), A (4), W (5), B (6), E (7), R (8), R (9), Y (10). So positions 3, 8, and 9. That's three 'r's. So the answer should be 3. But maybe I should check if there's any other 'r' I missed. Let me spell the word again: S-T-R-A-W-B-E-R-R-Y. No, there's no other 'r' in there. So the answer is three.
# </think>
# content: The word **"strawberry"** contains **3** instances of the letter **"r"**.
# Here's the breakdown:
# - **S**
# - **T**
# - **R** (1st **r**)
# - **A**
# - **W**
# - **B**
# - **E**
# - **R** (2nd **r**)
# - **R** (3rd **r**)
# - **Y**
# **Answer:** 3.
##BF16:
# thinking content:
# content: <think>
# Okay, let's see. The user is asking how many times the letter 'r' appears in the word "strawberry". Hmm, first I need to make sure I have the correct spelling of the word. Let me write it out: S-T-R-A-W-B-E-R-R-Y. Wait, is that right? Let me check again. Strawberry... yes, that's correct. Now, I need to count the number of 'r's in that spelling.
# Let me go through each letter one by one. Starting with the first letter: S. Not an 'r'. Next is T. Still not. Then R. Okay, that's one. Then A, W, B, E. So far, only one 'r'. Then comes the next letters: R. That's the second 'r'. Then another R. Wait, is there a third 'r'? Let me check again. The word is S-T-R-A-W-B-E-R-R-Y. So after the first R, there's a B, E, then two R's. So that's two R's? Wait, no. Let me count again. Let's break it down:
# 1. S
# 2. T
# 3. R (1st)
# 4. A
# 5. W
# 6. B
# 7. E
# 8. R (2nd)
# 9. R (3rd)
# 10. Y
# Wait, so the letters are S, T, R, A, W, B, E, R, R, Y. So the 'r's are at positions 3, 8, and 9. That's three 'r's. But wait, maybe I miscounted. Let me write it out again:
# S-T-R-A-W-B-E-R-R-Y. So after the first R (position 3), then the next letters are A, W, B, E, then R (position 8), then another R (position 9). So that's three R's. But sometimes people might miss the second R. Let me check again. The word is strawberry. Let me spell it again: S-T-R-A-W-B-E-R-R-Y. Yes, that's correct. So the R's are in the third, eighth, and ninth positions. Therefore, there are three 'r's. Wait, but I thought maybe the answer was two. Maybe I need to confirm. Let me check an online source or think of another way. Alternatively, maybe
prompt = "请简短介绍一下阿里巴巴公司"
##INT4:
# thinking content: <think>
# 好的,用户让我简短介绍一下阿里巴巴公司。首先,我需要确定用户的需求是什么。可能他们需要一个快速的概述,所以得保持简洁。阿里巴巴成立于1999年,由马云和其他团队创立,这点很重要,必须提到。
# 接下来,公司主要的业务是什么?淘宝、天猫这些电商平台是核心,还有阿里云,这是他们的重要部分。可能用户对云计算也感兴趣,所以得包括进去。另外,支付宝作为支付工具,也是阿里巴巴生态系统的一部分,应该提到。
# 然后,公司的影响力和地位。作为全球最大的电商平台之一,可能需要强调其规模和市场地位。还有,阿里巴巴的子公司和业务扩展,比如菜鸟网络、阿里健康等,但可能用户只需要简短,所以可能不需要展开。
# 用户可能想知道阿里巴巴的总部在哪里,但问题中没有提到,所以可能不需要。不过,如果有必要,可以加上。另外,成立年份和创始人是关键信息,必须准确。
# 需要检查是否有遗漏的重要信息,比如上市情况,阿里巴巴在纽交所上市,但可能用户不需要这么详细。保持简短的话,可能不需要。最后,确保语言流畅,信息准确,没有错误。
# </think>
# content: 阿里巴巴集团(Alibaba Group)是全球领先的电子商务和科技公司,成立于1999年,由马云及其团队在中国杭州创立。其核心业务包括电子商务平台(如淘宝、天猫)、云计算服务(阿里云)、数字支付(支付宝)、物流(菜鸟网络)及金融科技等。阿里巴巴通过创新和技术驱动,已成为中国乃至全球最具影响力的互联网企业之一。
##BF16:
# thinking content: <think>
# 好的,用户让我简短介绍一下阿里巴巴公司。首先,我需要确定用户的需求是什么。他们可能对阿里巴巴不太了解,或者需要快速了解其基本情况。简短介绍的话,应该包括成立时间、创始人、总部地点、主要业务和影响力。
# 接下来,我得确认阿里巴巴的成立时间。我记得是1999年,由马云和他的团队创立。总部在杭州,这点没错。主要业务方面,阿里巴巴集团现在涵盖电商、金融、物流、云计算等多个领域。比如淘宝、天猫是电商平台,蚂蚁金服是金融,菜鸟网络是物流,阿里云是云计算。这些都需要提到。
# 另外,可能需要提到它的全球影响力,比如成为全球最大的电商平台之一,以及对中国经济的贡献。不过要保持简短,不能太详细。还要注意不要遗漏重要子公司或业务板块,但也不能太冗长。
# 用户可能还想知道阿里巴巴的上市情况,比如在纽交所上市的时间,但可能不需要太详细,因为问题只是简短介绍。另外,是否需要提到最近的动态?比如最近的业务调整或者新项目?不过简短介绍的话,可能不需要,保持基础信息即可。
# 需要检查是否有错误信息,比如成立年份是否正确,创始人是否正确,总部是否在杭州。确认无误后,组织语言,确保简洁明了,不超过两到三句话。可能还需要提到其子公司和业务范围,但用简短的列举方式,比如“涵盖电子商务、金融、物流、云计算等领域”。
# </think>
# content: 阿里巴巴集团(Alibaba Group)成立于1999年,由马云及其团队在杭州创立,是全球领先的电子商务和科技公司。其核心业务包括电商平台(如淘宝、天猫)、金融科技(蚂蚁集团)、物流(菜鸟网络)及云计算(阿里云)等,致力于通过数字技术推动全球商业发展,已成为中国最具影响力的互联网企业之一。
```
### Generate the model
Here is the sample command to generate the model.
```bash
auto-round-best \
--model Qwen/Qwen3-30B-A3B \
--device 0 \
--group_size 128 \
--bits 4 \
--format 'auto_round' \
--output_dir "./tmp_autoround"
```
## Ethical Considerations and Limitations
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Therefore, before deploying any applications of the model, developers should perform safety testing.
## Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Here is a useful link to learn more about Intel's AI software:
- Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Cite
```
@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of llms},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}
```
[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round) |
nnilayy/deap-valence-binary-classification-no-wd-Kfold-4 | nnilayy | 2025-05-27T05:06:38Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
]
| null | 2025-05-27T05:06:36Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
Yuichi1218/Lafaek-llama3-8B-instruct-05261818 | Yuichi1218 | 2025-05-27T05:05:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T09:30:38Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Model Description
- epoch: 3
- dataset: New Testament translation data (新約聖書翻訳データ)
# Uploaded model
- **Developed by:** Yuichi1218
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
SeongeonKim/Qwen2.5-0.5B-schoolmath_LoRA | SeongeonKim | 2025-05-27T05:03:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T05:03:35Z | ---
base_model: unsloth/qwen2-0.5b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SeongeonKim
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2-0.5b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
SaoSamarth/openai-whisper-large-v2-Khmer-update-3 | SaoSamarth | 2025-05-27T05:02:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T05:02:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
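A minimal sketch, assuming (from the model name alone, which is an assumption) that this is a Whisper large-v2 fine-tune for Khmer speech recognition:

```python
from transformers import pipeline

# Assumption: the checkpoint is an ASR fine-tune; "audio.wav" is a placeholder path
asr = pipeline(
    "automatic-speech-recognition",
    model="SaoSamarth/openai-whisper-large-v2-Khmer-update-3",
)
print(asr("audio.wav")["text"])
```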
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ohpage/llama3.1-8b-kowiki-instruct-16bit | ohpage | 2025-05-27T05:02:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T04:54:56Z | ---
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ohpage
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
manuross1/nrmmtrfckdfll5k | manuross1 | 2025-05-27T05:00:49Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-27T03:58:04Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: nrmmtrfckdfll5k
---
# Nrmmtrfckdfll5K
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `nrmmtrfckdfll5k` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "nrmmtrfckdfll5k",
"lora_weights": "https://huggingface.co/manuross1/nrmmtrfckdfll5k/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('manuross1/nrmmtrfckdfll5k', weight_name='lora.safetensors')
image = pipeline('nrmmtrfckdfll5k').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 5000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/manuross1/nrmmtrfckdfll5k/discussions) to add images that show off what you’ve made with this LoRA.
|
btly/acsm | btly | 2025-05-27T04:55:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T04:48:00Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
Subhan0007/distilbert-base-uncased-lora-text-classification | Subhan0007 | 2025-05-27T04:48:41Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-27T04:48:32Z | ---
library_name: peft
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-lora-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-lora-text-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0589
- Accuracy: {'accuracy': 0.882}
## Model description
More information needed
## Intended uses & limitations
More information needed
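A minimal loading sketch, assuming a two-label sequence-classification head (the single accuracy metric suggests binary classification, but this is an assumption):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)  # num_labels=2 is assumed

# Attach the LoRA adapter from this repository
model = PeftModel.from_pretrained(base_model, "Subhan0007/distilbert-base-uncased-lora-text-classification")
```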
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|
| No log | 1.0 | 250 | 0.4499 | {'accuracy': 0.869} |
| 0.4334 | 2.0 | 500 | 0.5772 | {'accuracy': 0.862} |
| 0.4334 | 3.0 | 750 | 0.6053 | {'accuracy': 0.88} |
| 0.2242 | 4.0 | 1000 | 0.7245 | {'accuracy': 0.881} |
| 0.2242 | 5.0 | 1250 | 0.7843 | {'accuracy': 0.886} |
| 0.0713 | 6.0 | 1500 | 0.8832 | {'accuracy': 0.884} |
| 0.0713 | 7.0 | 1750 | 0.9427 | {'accuracy': 0.883} |
| 0.0247 | 8.0 | 2000 | 0.9874 | {'accuracy': 0.891} |
| 0.0247 | 9.0 | 2250 | 1.0409 | {'accuracy': 0.883} |
| 0.0091 | 10.0 | 2500 | 1.0589 | {'accuracy': 0.882} |
### Framework versions
- PEFT 0.15.2
- Transformers 4.50.1
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
thejaminator/instruct5-medium_high-medical-4e-05-16000-qwen3_32b | thejaminator | 2025-05-27T04:46:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-32B",
"base_model:finetune:unsloth/Qwen3-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T04:46:26Z | ---
base_model: unsloth/Qwen3-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-32B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lefantom00/Nemotron-Nano-temp | lefantom00 | 2025-05-27T04:46:31Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:nvidia/Llama-3.1-Nemotron-Nano-8B-v1",
"base_model:finetune:nvidia/Llama-3.1-Nemotron-Nano-8B-v1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T04:38:07Z | ---
base_model: nvidia/Llama-3.1-Nemotron-Nano-8B-v1
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lefantom00
- **License:** apache-2.0
- **Finetuned from model :** nvidia/Llama-3.1-Nemotron-Nano-8B-v1
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Bmingg/qwen2.5-0.5B-Instruct-DPO-5000 | Bmingg | 2025-05-27T04:45:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T04:44:26Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
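A minimal chat-inference sketch, assuming the tokenizer carries the standard Qwen2.5 instruct chat template:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Bmingg/qwen2.5-0.5B-Instruct-DPO-5000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]  # placeholder prompt
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```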
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NTIS/gemma3-1b-cpt-mixed-20250522-2-checkpoint-14655 | NTIS | 2025-05-27T04:43:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"pytorch",
"causal-lm",
"ko",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T04:38:35Z | ---
license: apache-2.0
language:
- ko
- en
tags:
- text-generation
- pytorch
- causal-lm
library_name: transformers
---
# gemma3-1b-cpt-mixed-20250522-2-checkpoint-14655
This model is a fine-tuned language model checkpoint.
## Model Information
- **Base model**: gemma3-1b-cpt-mixed-20250522-2
- **Checkpoint**: checkpoint-14655
- **Type**: Causal Language Model
- **License**: Apache 2.0
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "NTIS/gemma3-1b-cpt-mixed-20250522-2-checkpoint-14655"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto"
)
# Generate text
text = "안녕하세요"  # "Hello"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Notes
- This model is provided for research and experimental purposes
- Check the license before commercial use
|
SQshaik/ppo-LunarLander-v2 | SQshaik | 2025-05-27T04:41:37Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-27T04:41:17Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.91 +/- 25.88
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file listing):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; pick the actual .zip from the repo.
checkpoint = load_from_hub("SQshaik/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
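Once loaded, the agent can be evaluated locally (assuming `gymnasium` with the Box2D extras installed):
```python
import gymnasium as gym

env = gym.make("LunarLander-v2")
obs, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode reward: {total_reward:.2f}")
```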
|
mittswoodcut0p/katrina.lim.viral.kiffy.telegram.link.video | mittswoodcut0p | 2025-05-27T04:37:17Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T04:35:42Z | <a href="https://lojinx.cfd/dgfyh"> 🌐 Click Here To link (katrina.lim.viral.kiffy.telegram.link.video)
🔴 ➤►DOWNLOAD👉👉🟢 ➤ <a href="https://lojinx.cfd/dgfyh"> 🌐 katrina.lim.viral.kiffy.telegram.link.video
|
ember0313/llama3.1-8b-kowiki-instruct-16bit | ember0313 | 2025-05-27T04:36:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T02:56:22Z | ---
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ember0313
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
thejaminator/medium_high-medical-4e-05-8000-mcq0-qwen3_32b | thejaminator | 2025-05-27T04:35:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-32B",
"base_model:finetune:unsloth/Qwen3-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T04:35:22Z | ---
base_model: unsloth/Qwen3-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-32B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
btly/exsk | btly | 2025-05-27T04:34:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T04:27:00Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the Gemma 3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
BootesVoid/cmb3s2e7j07guu1cgteid9ti5_cmb5zvcml021mlexpz61bdzkk | BootesVoid | 2025-05-27T04:34:07Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-27T04:34:06Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: EMMA
---
# Cmb3S2E7J07Guu1Cgteid9Ti5_Cmb5Zvcml021Mlexpz61Bdzkk
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `EMMA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "EMMA",
"lora_weights": "https://huggingface.co/BootesVoid/cmb3s2e7j07guu1cgteid9ti5_cmb5zvcml021mlexpz61bdzkk/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb3s2e7j07guu1cgteid9ti5_cmb5zvcml021mlexpz61bdzkk', weight_name='lora.safetensors')
image = pipeline('EMMA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
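As an example of fusing, a minimal sketch using the `fuse_lora` API available in recent diffusers releases:

```py
# Continuing from the pipeline above: bake the adapter into the base weights.
pipeline.fuse_lora(lora_scale=0.9)  # fuse at 90% adapter strength
image = pipeline('EMMA').images[0]

# Detach the adapter again if you want the unmodified base model back.
pipeline.unfuse_lora()
```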
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb3s2e7j07guu1cgteid9ti5_cmb5zvcml021mlexpz61bdzkk/discussions) to add images that show off what you’ve made with this LoRA.
|
NTIS/gemma3-1b-cpt-mixed-20250522-2-checkpoint-13000 | NTIS | 2025-05-27T04:33:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"pytorch",
"causal-lm",
"ko",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T04:28:10Z | ---
license: apache-2.0
language:
- ko
- en
tags:
- text-generation
- pytorch
- causal-lm
library_name: transformers
---
# gemma3-1b-cpt-mixed-20250522-2-checkpoint-13000
This model is a fine-tuned language model checkpoint.
## Model Information
- **Base model**: gemma3-1b-cpt-mixed-20250522-2
- **Checkpoint**: checkpoint-13000
- **Type**: Causal Language Model
- **License**: Apache 2.0
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "NTIS/gemma3-1b-cpt-mixed-20250522-2-checkpoint-13000"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto"
)
# Generate text
text = "안녕하세요"  # "Hello"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Notes
- This model is provided for research and experimental purposes
- Check the license before commercial use
|
nnilayy/deap-valence-binary-classification-no-wd-Kfold-3 | nnilayy | 2025-05-27T04:32:37Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
]
| null | 2025-05-27T04:32:36Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
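As a loading sketch: with `PyTorchModelHubMixin`, the checkpoint is reloaded through the original `nn.Module` subclass, which is not published here. The class below is a hypothetical stand-in; loading will only succeed with the real training-time class definition.

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class ValenceClassifier(nn.Module, PyTorchModelHubMixin):  # hypothetical name/architecture
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.head = nn.LazyLinear(num_classes)

    def forward(self, x):
        return self.head(x)

# Works only if the class matches the one used during training.
model = ValenceClassifier.from_pretrained(
    "nnilayy/deap-valence-binary-classification-no-wd-Kfold-3"
)
```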
dhruvsangani/Multilingual-Sentiment-Analysis-GGUF | dhruvsangani | 2025-05-27T04:30:28Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-05-27T04:29:56Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** dhruvsangani
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
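Since this repo ships GGUF weights, a minimal inference sketch with `llama-cpp-python` (the quantization filename pattern is an assumption; check the repo's file listing):

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="dhruvsangani/Multilingual-Sentiment-Analysis-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant; glob patterns are supported
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Classify the sentiment: 'I love this product!'"}]
)
print(out["choices"][0]["message"]["content"])
```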
|
Sunkise22/earlyenjoyers | Sunkise22 | 2025-05-27T04:27:59Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-27T04:27:59Z | ---
license: apache-2.0
---
|
lisabdunlap/Qwen3-8B-base-pt-5e5_e3 | lisabdunlap | 2025-05-27T04:27:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T04:26:57Z | ---
base_model: unsloth/Qwen3-8B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lisabdunlap
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-8B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
djangodevloper/coctusmind-lite | djangodevloper | 2025-05-27T04:21:59Z | 0 | 0 | null | [
"pytorch",
"llama",
"llama-3",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"text-generation-inference",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:finetune:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"region:us"
]
| null | 2025-05-27T03:49:21Z | ---
base_model: meta-llama/Meta-Llama-3-8B
tags:
- llama-3
- llama
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- distillation
- text-generation-inference
model-index:
- name: OpenBioLLM-8B
results: []
license: llama3
language:
- en
widget:
- example_title: OpenBioLLM-8B
messages:
- role: system
content: >-
You are an expert and experienced from the healthcare and biomedical
domain with extensive medical knowledge and practical experience.
- role: user
content: How long does it take for newborn jaundice to go away?
output:
text: >-
Newborn jaundice, also known as neonatal jaundice, is a common condition
in newborns where the yellowing of the skin and eyes occurs due to an
elevated level of bilirubin in the blood. Bilirubin is a yellow pigment
that forms when red blood cells break down. In most cases, newborn
jaundice resolves on its own without any specific treatment.
The duration of newborn jaundice can vary depending on several factors
such as the underlying cause, gestational age at birth, and individual
variations in bilirubin metabolism. Here are some general guidelines:
1. Physiological jaundice: This is the most common type of newborn
jaundice and usually appears within 24-72 hours after birth. It tends to
peak between the second and fifth day of life and gradually improves over
the next week or two. By the time the baby is one week old, the jaundice
should have mostly resolved. 2. Breast milk jaundice: This type of
jaundice occurs in breastfed babies and may appear later than
physiological jaundice, typically between the fifth and fourteenth day of
life. It tends to persist for a longer duration but usually resolves
within six weeks after birth. 3. Pathological jaundice: This type of
jaundice is less common and occurs due to an underlying medical condition
that affects bilirubin metabolism or liver function. The duration of
pathological jaundice depends on the specific cause and may require
treatment.
It's important for parents to monitor their newborn's jaundice closely and
seek medical advice if the jaundice progresses rapidly, becomes severe, or
is accompanied by other symptoms such as poor feeding, lethargy, or
excessive sleepiness. In these cases, further evaluation and management
may be necessary. Remember that each baby is unique, and the timing of
jaundice resolution can vary. If you have concerns about your newborn's
jaundice, it's always best to consult with a healthcare professional for
personalized advice and guidance.
---
Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
🎓 **Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-3.5 and Meditron-70B on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundation of the [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model. It incorporates the DPO dataset and fine-tuning recipe along with a custom, diverse medical instruction dataset. Key components of the training pipeline include:
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.
⚙️ **Release Details**:
- **Model Size**: 8 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-Llama3-8B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-8B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by Llama-3 instruct version. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "aaditya/OpenBioLLM-Llama3-8B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device="auto",
)
messages = [
{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
{"role": "user", "content": "How can i split a 3mg or 4mg waefin pill so i can get a 2.5mg pill?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=False,  # greedy decoding (equivalent to temperature = 0)
)
print(outputs[0]["generated_text"][len(prompt):])
```
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 1
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- Lm harness for evaluation
# Benchmark Results
🔥 OpenBioLLM-8B demonstrates superior performance compared to larger models, such as GPT-3.5, Meditron-70B across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
🚨 **Below results are from the quantized version of OpenBioLLM-70B**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.
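As an illustration, such an extraction request could be issued through the same `pipeline` shown in the usage section above (the prompt and entity schema here are illustrative, not a fixed API):

```python
ner_messages = [
    {"role": "system", "content": "You are a clinical NLP assistant. Extract and categorize all medical entities (diseases, symptoms, medications, procedures)."},
    {"role": "user", "content": "Patient reports dyspnea and was started on 20 mg lisinopril after an echocardiogram."},
]
ner_prompt = pipeline.tokenizer.apply_chat_template(
    ner_messages, tokenize=False, add_generation_prompt=True
)
entities = pipeline(ner_prompt, max_new_tokens=128, do_sample=False)
print(entities[0]["generated_text"][len(ner_prompt):])
```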



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, medical document categorization

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B & 8B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B & 8B are intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
We thank the [Meta Team](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023) |
cuhkaih/rnafm | cuhkaih | 2025-05-27T04:19:17Z | 0 | 0 | null | [
"arxiv:2204.00300",
"arxiv:2002.05810",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-26T13:55:34Z | ---
license: apache-2.0
---
## About
RNA-FM (RNA Foundation Model) is a state-of-the-art **pretrained language model for RNA sequences**, serving as the foundation for an integrated RNA research ecosystem.
Trained on **23+ million non-coding RNA (ncRNA) sequences** via self-supervised learning, RNA-FM extracts comprehensive structural and functional information from RNA sequences *without* relying on experimental labels.
**[mRNA‑FM](https://arxiv.org/abs/2204.00300)** is a direct extension of RNA-FM, trained exclusively on 45 million mRNA coding sequences (CDS).
It is specifically designed to capture information unique to mRNA and has demonstrated excellent performance in related tasks.
Consequently, RNA-FM generates **general-purpose RNA embeddings** suitable for a broad range of downstream tasks, including but not limited to secondary and tertiary structure prediction, RNA family clustering, and functional RNA analysis.
The full code is available on GitHub: https://github.com/ml4bio/RNA-FM.
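A minimal embedding-extraction sketch following the GitHub README (the `fm` package and the `rna_fm_t12` loader are assumptions based on that repo, not a pip-installable API):

```python
import torch
import fm  # provided by the RNA-FM GitHub repository

# Load the pretrained RNA-FM model and its tokenizer-like alphabet.
model, alphabet = fm.pretrained.rna_fm_t12()
batch_converter = alphabet.get_batch_converter()
model.eval()

data = [("example", "GGGUGCGAUCAUACCAGCACUAAUGCCCUCCUGGGAAGUCCUCGUGUUGCACCCCU")]
_, _, tokens = batch_converter(data)

with torch.no_grad():
    results = model(tokens, repr_layers=[12])
embeddings = results["representations"][12]  # per-nucleotide embeddings, dim 640
```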
## Citation
If you use the model in your research, please cite our paper with the following.
```
@article{chen2022interpretable,
title={Interpretable RNA foundation model from unannotated data for highly accurate RNA structure and function predictions},
author={Chen, Jiayang and Hu, Zhihang and Sun, Siqi and Tan, Qingxiong and Wang, Yixuan and Yu, Qinze and Zong, Licheng and Hong, Liang and Xiao, Jin and Shen, Tao and others},
journal={arXiv preprint arXiv:2204.00300},
year={2022}
}
@article{shen2024accurate,
title={Accurate RNA 3D structure prediction using a language model-based deep learning approach},
author={Shen, Tao and Hu, Zhihang and Sun, Siqi and Liu, Di and Wong, Felix and Wang, Jiuming and Chen, Jiayang and Wang, Yixuan and Hong, Liang and Xiao, Jin and others},
journal={Nature Methods},
pages={1--12},
year={2024},
publisher={Nature Publishing Group US New York}
}
@article{chen2020rna,
title={RNA secondary structure prediction by learning unrolled algorithms},
author={Chen, Xinshi and Li, Yu and Umarov, Ramzan and Gao, Xin and Song, Le},
journal={arXiv preprint arXiv:2002.05810},
year={2020}
}
``` |
btly/ifsc | btly | 2025-05-27T04:12:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T04:02:00Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the Gemma 3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |