modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-13 06:28:01) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 518 classes) | tags (list, 1 to 4.05k entries) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-13 06:25:04) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
silviasapora/gemma-7b-sft-silvia_simpo-basic-5e-7-005-v142 | silviasapora | 2025-03-31T16:15:51Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"orpo",
"conversational",
"dataset:argilla/dpo-mix-7k",
"arxiv:2403.07691",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T15:48:59Z | ---
datasets:
- argilla/dpo-mix-7k
library_name: transformers
model_name: /home/silvias/docker/alignment-handbook/data/gemma-7b-sft-basic-5e-5-00-v130-full
tags:
- generated_from_trainer
- alignment-handbook
- trl
- orpo
licence: license
---
# Model Card for /home/silvias/docker/alignment-handbook/data/gemma-7b-sft-basic-5e-5-00-v130-full
This model is a fine-tuned version of an unspecified base model on the [argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="silviasapora/gemma-7b-sft-silvia_simpo-basic-5e-7-005-v142", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/silvias/huggingface/runs/0uzh8s74)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
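As a rough illustration of the training setup (not the exact recipe behind this checkpoint; the base model and hyperparameters below are assumptions), an ORPO run with TRL looks roughly like this:
```python
# Hedged sketch of ORPO training with TRL; model choice and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "google/gemma-7b"  # assumed SFT base; the card actually points to a local checkpoint
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Preference pairs (chosen/rejected) from the dataset referenced by this card.
train_dataset = load_dataset("argilla/dpo-mix-7k", split="train")

args = ORPOConfig(output_dir="gemma-7b-orpo", beta=0.1, learning_rate=5e-7)
trainer = ORPOTrainer(model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()
```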
### Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.21.1
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Kirill-K/microsoft_phi_4_ft | Kirill-K | 2025-03-31T16:15:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-03-31T16:06:59Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RaZiX/xlm-roberta-csfd-50 | RaZiX | 2025-03-31T16:14:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-03-31T15:43:52Z | ---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: xlm-roberta-csfd-50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-csfd-50
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4416
- Accuracy: 0.8869
- F1: 0.8885
- Precision: 0.8945
- Recall: 0.8869
## Model description
More information needed
## Intended uses & limitations
More information needed
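As a minimal orientation, the checkpoint can be loaded with the standard text-classification pipeline; the Czech example below is only an assumption about the intended domain, and the label set comes from the checkpoint's own config:
```python
from transformers import pipeline

# Load the fine-tuned multilingual classifier; labels are read from the checkpoint's id2label mapping.
classifier = pipeline("text-classification", model="RaZiX/xlm-roberta-csfd-50")

# Hypothetical Czech input; the actual label inventory depends on the fine-tuning data.
print(classifier("Skvělý film, herecké výkony byly vynikající."))
```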
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 2.3023 | 1.0 | 1459 | 1.1755 | 0.7168 | 0.7033 | 0.7581 | 0.7168 |
| 0.9086 | 2.0 | 2918 | 0.6894 | 0.8285 | 0.8309 | 0.8465 | 0.8285 |
| 0.4916 | 3.0 | 4377 | 0.5483 | 0.8499 | 0.8533 | 0.8691 | 0.8499 |
| 0.2577 | 4.0 | 5836 | 0.4593 | 0.8795 | 0.8809 | 0.8884 | 0.8795 |
| 0.1551 | 5.0 | 7295 | 0.4416 | 0.8869 | 0.8885 | 0.8945 | 0.8869 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.4.0
- Tokenizers 0.21.1
|
bebecu/SCHIELE_LoRA | bebecu | 2025-03-31T16:14:08Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2025-03-31T14:29:41Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: painting in SCHIELE style
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - bebecu/SCHIELE_LoRA
<Gallery />
## Model description
These are bebecu/SCHIELE_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `painting in SCHIELE style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/bebecu/SCHIELE_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
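Until the snippet above is filled in, the following is a minimal sketch of the standard diffusers SDXL LoRA loading flow (the prompt and inference settings are assumptions, not values from the training run):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model, then attach the LoRA adaptation weights from this repository.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("bebecu/SCHIELE_LoRA")

# The trigger phrase from the card activates the learned style.
image = pipe("painting in SCHIELE style, portrait of a dancer", num_inference_steps=30).images[0]
image.save("schiele_lora_demo.png")
```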
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
MinaMila/llama_instbase_unlearned_GermanCredit_9ep_22 | MinaMila | 2025-03-31T16:13:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:MinaMila/llama3_unlearning_general_methode",
"base_model:finetune:MinaMila/llama3_unlearning_general_methode",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T16:10:48Z | ---
base_model: MinaMila/llama3_unlearning_general_methode
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model:** MinaMila/llama3_unlearning_general_methode
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
cassioblaz/gemma3 | cassioblaz | 2025-03-31T16:12:15Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"conversational",
"en",
"base_model:unsloth/gemma-3-27b-it-unsloth-bnb-4bit",
"base_model:quantized:unsloth/gemma-3-27b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-27T23:13:55Z | ---
base_model: unsloth/gemma-3-27b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** cassioblaz
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-27b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
IHEII/QwQ-32B-unsloth-bnb-4bit-CoT-Finetuned-Spill-Knowledge-SFT-v0.1.0 | IHEII | 2025-03-31T16:11:02Z | 23 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/QwQ-32B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/QwQ-32B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-27T07:07:12Z | ---
base_model: unsloth/QwQ-32B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** IHEII
- **License:** apache-2.0
- **Finetuned from model:** unsloth/QwQ-32B-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
alorenc/llava_project_projection | alorenc | 2025-03-31T16:10:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T15:38:48Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mergekit-community/mergekit-dare_ties-kijcnnr | mergekit-community | 2025-03-31T16:10:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"base_model:PocketDoc/Dans-SakuraKaze-V1.0.0-12b",
"base_model:merge:PocketDoc/Dans-SakuraKaze-V1.0.0-12b",
"base_model:ReadyArt/Forgotten-Safeword-12B-3.6",
"base_model:merge:ReadyArt/Forgotten-Safeword-12B-3.6",
"base_model:TheDrummer/Rocinante-12B-v1.1",
"base_model:merge:TheDrummer/Rocinante-12B-v1.1",
"base_model:mistralai/Mistral-Nemo-Base-2407",
"base_model:merge:mistralai/Mistral-Nemo-Base-2407",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T16:04:14Z | ---
base_model:
- TheDrummer/Rocinante-12B-v1.1
- mistralai/Mistral-Nemo-Base-2407
- PocketDoc/Dans-SakuraKaze-V1.0.0-12b
- ReadyArt/Forgotten-Safeword-12B-3.6
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [mistralai/Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407) as the base.
### Models Merged
The following models were included in the merge:
* [TheDrummer/Rocinante-12B-v1.1](https://huggingface.co/TheDrummer/Rocinante-12B-v1.1)
* [PocketDoc/Dans-SakuraKaze-V1.0.0-12b](https://huggingface.co/PocketDoc/Dans-SakuraKaze-V1.0.0-12b)
* [ReadyArt/Forgotten-Safeword-12B-3.6](https://huggingface.co/ReadyArt/Forgotten-Safeword-12B-3.6)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-Nemo-Base-2407
# No parameters necessary for base model
- model: TheDrummer/Rocinante-12B-v1.1
parameters:
density: 0.55
weight: 0.4
- model: ReadyArt/Forgotten-Safeword-12B-3.6
parameters:
density: 0.53
weight: 0.3
- model: PocketDoc/Dans-SakuraKaze-V1.0.0-12b
parameters:
density: 0.50
weight: 0.2
merge_method: dare_ties
base_model: mistralai/Mistral-Nemo-Base-2407
parameters:
normalize: true
int8_mask: true
dtype: bfloat16
```
|
RyanYr/reflect_mini8Bit_Om2G8kOm2AgG8k40kIpsdpT02 | RyanYr | 2025-03-31T16:09:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:mistralai/Ministral-8B-Instruct-2410",
"base_model:finetune:mistralai/Ministral-8B-Instruct-2410",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T12:54:03Z | ---
base_model: mistralai/Ministral-8B-Instruct-2410
library_name: transformers
model_name: reflect_mini8Bit_Om2G8kOm2AgG8k40kIpsdpT02
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for reflect_mini8Bit_Om2G8kOm2AgG8k40kIpsdpT02
This model is a fine-tuned version of [mistralai/Ministral-8B-Instruct-2410](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/reflect_mini8Bit_Om2G8kOm2AgG8k40kIpsdpT02", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/uun0ytpj)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
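For orientation only, a minimal DPO run with TRL follows the pattern below (the dataset and hyperparameters here are placeholders, not the actual recipe used for this model):
```python
# Hedged sketch of DPO training with TRL; dataset and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "mistralai/Ministral-8B-Instruct-2410"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Any preference dataset with prompt/chosen/rejected pairs works here.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(output_dir="ministral-8b-dpo", beta=0.1)
trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()  # a frozen reference model is created automatically when none is passed
```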
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/Llama-3.2-3B-reasonV1-GGUF | mradermacher | 2025-03-31T16:07:35Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:chrisrutherford/Llama-3.2-3B-reasonV1",
"base_model:quantized:chrisrutherford/Llama-3.2-3B-reasonV1",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-03-31T15:56:47Z | ---
base_model: chrisrutherford/Llama-3.2-3B-reasonV1
language:
- en
library_name: transformers
license: llama3.2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/chrisrutherford/Llama-3.2-3B-reasonV1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
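As a local-inference illustration (not part of the original card), one of the GGUF files listed under Provided Quants below can be loaded with the llama-cpp-python bindings; the file name and generation settings here are assumptions:
```python
from llama_cpp import Llama

# Assumes the Q4_K_M quant from this repo has already been downloaded next to the script.
llm = Llama(model_path="Llama-3.2-3B-reasonV1.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF file is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```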
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-reasonV1-GGUF/resolve/main/Llama-3.2-3B-reasonV1.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-reasonV1-GGUF/resolve/main/Llama-3.2-3B-reasonV1.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-reasonV1-GGUF/resolve/main/Llama-3.2-3B-reasonV1.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-reasonV1-GGUF/resolve/main/Llama-3.2-3B-reasonV1.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-reasonV1-GGUF/resolve/main/Llama-3.2-3B-reasonV1.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-reasonV1-GGUF/resolve/main/Llama-3.2-3B-reasonV1.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-reasonV1-GGUF/resolve/main/Llama-3.2-3B-reasonV1.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-reasonV1-GGUF/resolve/main/Llama-3.2-3B-reasonV1.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-reasonV1-GGUF/resolve/main/Llama-3.2-3B-reasonV1.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-reasonV1-GGUF/resolve/main/Llama-3.2-3B-reasonV1.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-reasonV1-GGUF/resolve/main/Llama-3.2-3B-reasonV1.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-reasonV1-GGUF/resolve/main/Llama-3.2-3B-reasonV1.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
bowilleatyou/8d8e4e38-eed3-4877-908a-430e0420f3a2 | bowilleatyou | 2025-03-31T16:06:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T12:48:53Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/llama_instbase_unlearned_GermanCredit_7ep_22 | MinaMila | 2025-03-31T16:05:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:MinaMila/llama3_unlearning_general_methode",
"base_model:finetune:MinaMila/llama3_unlearning_general_methode",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T16:02:47Z | ---
base_model: MinaMila/llama3_unlearning_general_methode
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model:** MinaMila/llama3_unlearning_general_methode
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/tokyotech-llm_-_Llama-3-Swallow-8B-Instruct-v0.1-8bits | RichardErkhov | 2025-03-31T16:05:27Z | 0 | 0 | null | [
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-03-31T15:58:56Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3-Swallow-8B-Instruct-v0.1 - bnb 8bits
- Model creator: https://huggingface.co/tokyotech-llm/
- Original model: https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1/
Original model description:
---
language:
- en
- ja
library_name: transformers
pipeline_tag: text-generation
license: llama3
model_type: llama
---
# Llama3 Swallow - Built with Meta Llama 3
Our Swallow model has undergone continual pre-training from the [Llama 3 family](https://huggingface.co/collections/meta-llama/meta-llama-3-66214712577ca38149ebb2b6), primarily with the addition of Japanese language data. The Instruct versions use supervised fine-tuning (SFT) and Chat Vector. Links to other models can be found in the index.
# Model Release Updates
We are excited to share the release schedule for our latest models:
- **July 1, 2024**: Released the [Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1), [Llama-3-Swallow-8B-Instruct-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1), [Llama-3-Swallow-70B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-70B-v0.1), and [Llama-3-Swallow-70B-Instruct-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-70B-Instruct-v0.1).
## Swallow Model Index
|Model|Llama-3-Swallow|Llama3 Swallow Instruct|
|---|---|---|
|8B| [Link](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1) | [Link](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1) |
|70B| [Link](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-70B-v0.1) | [Link](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-70B-Instruct-v0.1) |

This repository provides large language models developed by [Swallow-LLM](https://swallow-llm.github.io/).
Read our [blog post](https://zenn.dev/tokyotech_lm/articles/f65989d76baf2c).
## Model Details
* **Model type**: Please refer to [Llama 3 MODEL_CARD](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md) for details on the model architecture.
* **Language(s)**: Japanese English
* **Library**: [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)
* **Tokenizer**: Please refer to [Llama 3 blog](https://ai.meta.com/blog/meta-llama-3/) for details on the tokenizer.
* **Contact**: swallow[at]nlp.c.titech.ac.jp
## Model Performance
### Japanese tasks
|Model|Size|JCom.|JEMHopQA|NIILC|JSQuAD|XL-Sum|MGSM|WMT20-en-ja|WMT20-ja-en|JMMLU|JHumanEval|Ja Avg|
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | |4-shot|4-shot|4-shot|4-shot|1-shot|4-shot|4-shot|4-shot|5-shot|0-shot| |
| | |EM acc|Char-F1|Char-F1|Char-F1|ROUGE-2|EM acc|BLEU|BLEU|EM acc|pass@1| |
|calm2-7b-chat|7B|0.2413|0.5128|0.4956|0.7729|0.0551|0.0480|0.2208|0.1384|0.2482|0.0000|0.2733|
|Swallow-7b-instruct-v0.1|7B|0.6059|0.4760|0.5284|0.8396|0.1546|0.1360|0.2285|0.1783|0.3510|0.0256|0.3524|
|Swallow-MS-7b-instruct-v0.1|7B|0.7435|0.5066|0.4268|0.8594|0.1582|0.1760|0.2260|0.1880|0.4177|0.2244|0.3927|
|RakutenAI-7B-chat|7B|0.9035|0.2600|0.4619|0.8647|0.1339|0.2120|0.2667|0.1966|0.4504|0.2299|0.3980|
|Qwen2-7B-Instruct|7B|0.8856|0.3902|0.3859|0.8967|0.1277|0.5720|0.2041|0.1909|0.5713|0.5683|0.4793|
|Meta-Llama-3-8B-Instruct|8B|0.8785|0.3812|0.3936|0.8955|0.1273|0.4160|0.2143|0.2035|0.4719|0.2872|0.4269|
|Llama-3-ELYZA-JP-8B|8B|0.9017|0.5124|0.5016|0.9113|0.1677|0.4600|0.2509|0.1846|0.4829|0.3811|0.4754|
|Llama-3-Swallow-8B-Instruct-v0.1|8B|0.9178|0.4963|0.5168|0.9088|0.1296|0.4880|0.2522|0.2254|0.4835|0.3927|0.4811|
### English tasks
|Model|Size|OpenBookQA|TriviaQA|HellaSWAG|SQuAD2.0|XWINO|MMLU|GSM8K|BBH|HumanEval|En Avg|
|---|---|---|---|---|---|---|---|---|---|---|---|
| | |4-shot|4-shot|4-shot|4-shot|4-shot|5-shot|4-shot|3-shot|0-shot| |
| | |Acc|EM acc|Acc|EM acc|Acc|Acc|EM acc|CoT EM Acc|pass@1| |
|calm2-7b-chat|7B|0.2860|0.3528|0.5042|0.2524|0.8413|0.3860|0.0546|0.2990|0.0000|0.3307|
|Swallow-7b-instruct-v0.1|7B|0.3280|0.4810|0.5501|0.2720|0.8774|0.4066|0.1251|0.3646|0.0866|0.3879|
|Swallow-MS-7b-instruct-v0.1|7B|0.3600|0.4999|0.5858|0.3030|0.8834|0.5273|0.2108|0.4386|0.2512|0.4511|
|RakutenAI-7B-chat|7B|0.4160|0.5971|0.6465|0.3091|0.8886|0.5757|0.3139|0.4958|0.2671|0.5011|
|Qwen2-7B-Instruct|7B|0.4000|0.5468|0.6146|0.3518|0.8852|0.7073|0.6300|0.3101|0.6354|0.5646|
|Meta-Llama-3-8B-Instruct|8B|0.3880|0.6687|0.5834|0.3743|0.8903|0.6567|0.7453|0.6478|0.5415|0.6107|
|Llama-3-ELYZA-JP-8B|8B|0.3200|0.5502|0.5224|0.3631|0.8809|0.5875|0.5701|0.3213|0.4604|0.5084|
|Llama-3-Swallow-8B-Instruct-v0.1|8B|0.3720|0.6557|0.5861|0.3648|0.9002|0.6315|0.5959|0.6391|0.4238|0.5743|
## MT-Bench JA
|Model|Size|coding|extraction|humanities|math|reasoning|roleplay|stem|writing|JMTAvg|
|---|---|---|---|---|---|---|---|---|---|---|
|calm2-7b-chat|7B|0.1198|0.3793|0.4231|0.1011|0.1799|0.4760|0.3568|0.4583|0.3118|
|Swallow-7b-instruct-v0.1|7B|0.1947|0.3156|0.4991|0.1900|0.2141|0.5330|0.4535|0.4624|0.3578|
|Swallow-MS-7b-instruct-v0.1|7B|0.2235|0.3743|0.4611|0.1060|0.3404|0.4287|0.3969|0.3877|0.3398|
|RakutenAI-7B-chat|7B|0.2475|0.3522|0.4692|0.2140|0.3926|0.4427|0.3977|0.4434|0.3699|
|Qwen2-7B-Instruct|7B|0.4635|0.6909|0.6857|0.5970|0.5042|0.6667|0.5353|0.6808|0.6030|
|Meta-Llama-3-8B-Instruct|8B|0.3744|0.6876|0.6225|0.2070|0.5032|0.5248|0.5326|0.4884|0.4926|
|Llama-3-ELYZA-JP-8B|8B|0.2908|0.6421|0.6406|0.3088|0.5500|0.6740|0.5251|0.6744|0.5382|
|Llama-3-Swallow-8B-Instruct-v0.1|8B|0.3547|0.6508|0.5371|0.2718|0.4007|0.5493|0.4752|0.5730|0.4766|
## Evaluation Benchmarks
### Japanese evaluation benchmarks
We used llm-jp-eval (v1.3.0), the JP Language Model Evaluation Harness (commit #9b42d41), and the Code Generation LM Evaluation Harness (commit #0261c52). The details are as follows:
- Multiple-choice question answering (JCommonsenseQA [Kurihara et al., 2022])
- Open-ended question answering (JEMHopQA [Ishii et al., 2024])
- Open-ended question answering (NIILC [関根, 2003])
- Machine reading comprehension (JSQuAD [Kurihara et al., 2022])
- Automatic summarization (XL-Sum [Hasan et al., 2021])
- Machine translation (WMT2020 ja-en [Barrault et al., 2020])
- Machine translation (WMT2020 en-ja [Barrault et al., 2020])
- Mathematical reasoning (MGSM [Shi et al., 2023])
- Academic exams (JMMLU [尹ら, 2024])
- Code generation (JHumanEval [佐藤ら, 2024])
### English evaluation benchmarks
We used the Language Model Evaluation Harness (v0.4.2) and the Code Generation LM Evaluation Harness (commit #0261c52). The details are as follows:
- Multiple-choice question answering (OpenBookQA [Mihaylov et al., 2018])
- Open-ended question answering (TriviaQA [Joshi et al., 2017])
- Machine reading comprehension (SQuAD2 [Rajpurkar et al., 2018])
- Commonsense reasoning (XWINO [Tikhonov and Ryabinin, 2021])
- Natural language inference (HellaSwag [Zellers et al., 2019])
- Mathematical reasoning (GSM8K [Cobbe et al., 2021])
- Reasoning (BBH (BIG-Bench-Hard) [Suzgun et al., 2023])
- Academic exams (MMLU [Hendrycks et al., 2021])
- Code generation (HumanEval [Chen et al., 2021])
### MT-Bench JA
We used [Japanese MT-Bench](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_question) to assess the instruction-following capabilities of models.
We utilized the following settings:
- Implementation: FastChat [Zheng+, 2023] (commit #e86e70d0)
- Question: [Nejumi LLM-Leaderboard NEO, mtbench_ja_question_v3](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_question/v3)
- Reference Answer: [Nejumi LLM-Leaderboard NEO, mtbench_ja_referenceanswer_v1](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_referenceanswer/v1)
- Prompt for Judge: [Nejumi LLM-Leaderboard NEO, mtbench_ja_prompt_v1](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_prompt/v1)
- Judge: `gpt-4-1106-preview`
- Scoring: Absolute scale normalized to a 0-1 range, averaged over five runs.
## Usage
```sh
pip install vllm
```
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
model_name = "tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(
model=model_name,
tensor_parallel_size=1,
)
sampling_params = SamplingParams(
temperature=0.6, top_p=0.9, max_tokens=512, stop="<|eot_id|>"
)
message = [
{"role": "system", "content": "あなたは誠実で優秀な日本人のアシスタントです。"},
{
"role": "user",
"content": "東京の夜空に打ち上がっている花火の下、向かい合っている燕とラマの温かい物語を書いてください。",
},
]
prompt = tokenizer.apply_chat_template(
message, tokenize=False, add_generation_prompt=True
)
output = llm.generate(prompt, sampling_params)
print(output[0].outputs[0].text)
```
## Training Datasets
### Instruction Tuning
The following datasets were used for the instruction tuning.
- [OpenAssistant Conversations Dataset EN top-1 thread](https://huggingface.co/datasets/OpenAssistant/oasst2)
- [OpenAssistant Conversations Dataset](https://huggingface.co/datasets/llm-jp/oasst1-21k-ja) was used, keeping the human utterances but discarding the original responses; the responses were instead regenerated with the [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) model.
## Risks and Limitations
The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Acknowledgements
We thank Meta Research for releasing Llama 3 under an open license for others to build on.
Our project is supported by the [Large Generative AI Development Support Program](https://abci.ai/en/link/lfm_support_program.html) of the National Institute of Advanced Industrial Science and Technology.
## License
[META LLAMA 3 COMMUNITY LICENSE](https://llama.meta.com/llama3/license/)
## Authors
Here are the team members:
- From [Tokyo Institute of Technology Okazaki Laboratory](https://www.nlp.c.titech.ac.jp/index.en.html), the following members:
- [Naoaki Okazaki](https://www.chokkan.org/index.ja.html)
- [Sakae Mizuki](https://s-mizuki-nlp.github.io/)
- [Youmi Ma](https://www.nlp.c.titech.ac.jp/member/youmi.en.html)
- [Koki Maeda](https://sites.google.com/view/silviase)
- [Kakeru Hattori](https://aya-se.vercel.app/)
- [Masanari Ohi](https://sites.google.com/view/masanariohi)
- [Taihei Shiotani](https://github.com/inatoihs)
- [Koshiro Saito](https://sites.google.com/view/koshiro-saito)
- From [Tokyo Institute of Technology YOKOTA Laboratory](https://www.rio.gsic.titech.ac.jp/en/index.html), the following members:
- [Rio Yokota](https://twitter.com/rioyokota)
- [Kazuki Fujii](https://twitter.com/okoge_kaz)
- [Taishi Nakamura](https://twitter.com/Setuna7777_2)
- [Takumi Okamoto](https://www.linkedin.com/in/takumi-okamoto)
- [Ishida Shigeki](https://www.wantedly.com/id/reborn27)
- From [Artificial Intelligence Research Center, AIST, Japan](https://www.airc.aist.go.jp/en/teams/), the following members:
- [Hiroya Takamura](https://sites.google.com/view/hjtakamura)
## How to cite
If you find our work helpful, please feel free to cite us.
```
@inproceedings{Fujii:COLM2024,
title={Continual Pre-Training for Cross-Lingual LLM Adaptation:
Enhancing Japanese Language Capabilities},
author={Kazuki Fujii and Taishi Nakamura and Mengsay Loem and Hiroki
Iida and Masanari Ohi and Kakeru Hattori and Hirai Shota and Sakae
Mizuki and Rio Yokota and Naoaki Okazaki},
booktitle="Proceedings of the First Conference on Language Modeling",
series={COLM},
pages="(to appear)",
year="2024",
month=oct,
address={University of Pennsylvania, USA},
}
@inproceedings{Okazaki:COLM2024,
title={Building a Large Japanese Web Corpus for Large Language Models},
author={Naoaki Okazaki and Kakeru Hattori and Hirai Shota and Hiroki
Iida and Masanari Ohi and Kazuki Fujii and Taishi Nakamura and Mengsay
Loem and Rio Yokota and Sakae Mizuki},
booktitle="Proceedings of the First Conference on Language Modeling",
series={COLM},
pages="(to appear)",
year="2024",
month=oct,
address={University of Pennsylvania, USA},
}
```
### Citations
```tex
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
|
X-ART/LeX-FLUX | X-ART | 2025-03-31T16:03:59Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"image-generation",
"flux",
"en",
"arxiv:2503.21749",
"license:other",
"endpoints_compatible",
"diffusers:FluxPipeline",
"region:us"
]
| text-to-image | 2025-03-10T05:52:56Z | ---
language:
- en
license: other
license_name: flux-1-dev-non-commercial-license
license_link: LICENSE.md
extra_gated_prompt: By clicking "Agree", you agree to the [FluxDev Non-Commercial License Agreement](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md)
and acknowledge the [Acceptable Use Policy](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/POLICY.md).
tags:
- text-to-image
- image-generation
- flux
---
**LeX-Art: Rethinking Text Generation via Scalable High-Quality Data Synthesis**
This repository contains the model presented in the paper [LeX-Art: Rethinking Text Generation via Scalable High-Quality Data Synthesis](https://huggingface.co/papers/2503.21749).
The abstract of the paper is the following:
We introduce LeX-Art, a comprehensive suite for high-quality text-image synthesis that systematically bridges the gap between prompt expressiveness and text rendering fidelity. Our approach follows a data-centric paradigm, constructing a high-quality data synthesis pipeline based on Deepseek-R1 to curate LeX-10K, a dataset of 10K high-resolution, aesthetically refined 1024×1024 images. Beyond dataset construction, we develop LeX-Enhancer, a robust prompt enrichment model, and train two text-to-image models, LeX-FLUX and LeX-Lumina, achieving state-of-the-art text rendering performance. To systematically evaluate visual text generation, we introduce LeX-Bench, a benchmark that assesses fidelity, aesthetics, and alignment, complemented by Pairwise Normalized Edit Distance (PNED), a novel metric for robust text accuracy evaluation. Experiments demonstrate significant improvements, with LeX-Lumina achieving a 22.16% PNED gain, and LeX-FLUX outperforming baselines in color (+10.32%), positional (+5.60%), and font accuracy (+5.63%). The code, models, datasets, and demo are publicly available.

**Usage of LeX-FLUX:**
```python
import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_pretrained("X-ART/LeX-FLUX", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload() #save some VRAM by offloading the model to CPU. Remove this if you have enough GPU power
prompt = "The image features a bold, dramatic design centered around the text elements \"THE,\" \"RA,\" and \"SA4GONEARAz,\" arranged to form the title of *The Boulet Brothers Dragula Season Three*. The background is a textured, dark slate-gray surface with faint grunge patterns, adding a gritty, industrial vibe. The word \"THE\" is positioned at the top in large, jagged, blood-red letters with a glossy finish and slight drop shadows, evoking a horror-inspired aesthetic. Below it, \"RA\" appears in the middle-left section, rendered in metallic silver with a fragmented, cracked texture, while \"SA4GONEARAz\" curves dynamically to the right, its letters styled in neon-green and black gradients with angular, cyberpunk-inspired edges. The number \"4\" in \"SA4GONEARAz\" replaces an \"A,\" blending seamlessly into the stylized typography. Thin, glowing purple outlines highlight the text, contrasting against the dark backdrop. Subtle rays of violet and crimson light streak diagonally across the composition, casting faint glows around the letters. The overall layout balances asymmetry and cohesion, with sharp angles and a mix of organic and mechanical design elements, creating a visually intense yet polished aesthetic that merges gothic horror with futuristic edge."
image = pipe(
prompt,
height=1024,
width=1024,
guidance_scale=3.5,
output_type="pil",
num_inference_steps=50,
max_sequence_length=512,
generator=torch.Generator("cpu").manual_seed(0)
).images[0]
image.save("lex_flux_demo.png")
```
See also:
* [Project page](https://zhaoshitian.github.io/lexart/)
* [Code](https://github.com/zhaoshitian/LeX-Art)
|
TareksLab/Doppleganger-V2-LLaMa-70B | TareksLab | 2025-03-31T16:03:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2406.11617",
"base_model:SentientAGI/Dobby-Unhinged-Llama-3.3-70B",
"base_model:merge:SentientAGI/Dobby-Unhinged-Llama-3.3-70B",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:flammenai/Llama3.1-Flammades-70B",
"base_model:merge:flammenai/Llama3.1-Flammades-70B",
"base_model:flammenai/Mahou-1.5-llama3.1-70B",
"base_model:merge:flammenai/Mahou-1.5-llama3.1-70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T14:49:51Z | ---
base_model:
- SentientAGI/Dobby-Unhinged-Llama-3.3-70B
- SicariusSicariiStuff/Negative_LLAMA_70B
- flammenai/Llama3.1-Flammades-70B
- flammenai/Mahou-1.5-llama3.1-70B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method, with [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B) as the base.
### Models Merged
The following models were included in the merge:
* [SentientAGI/Dobby-Unhinged-Llama-3.3-70B](https://huggingface.co/SentientAGI/Dobby-Unhinged-Llama-3.3-70B)
* [flammenai/Llama3.1-Flammades-70B](https://huggingface.co/flammenai/Llama3.1-Flammades-70B)
* [flammenai/Mahou-1.5-llama3.1-70B](https://huggingface.co/flammenai/Mahou-1.5-llama3.1-70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: flammenai/Llama3.1-Flammades-70B
parameters:
weight: 0.25
density: 0.5
epsilon: 0.05
lambda: 1.0
- model: flammenai/Mahou-1.5-llama3.1-70B
parameters:
weight: 0.25
density: 0.5
epsilon: 0.05
lambda: 1.0
- model: SentientAGI/Dobby-Unhinged-Llama-3.3-70B
parameters:
weight: 0.25
density: 0.5
epsilon: 0.05
lambda: 1.0
- model: SicariusSicariiStuff/Negative_LLAMA_70B
parameters:
weight: 0.25
density: 0.5
epsilon: 0.05
lambda: 1.0
merge_method: della
base_model: SicariusSicariiStuff/Negative_LLAMA_70B
parameters:
normalize: false
int8_mask: true
dtype: bfloat16
chat_template: llama3
tokenizer:
source: union
```
|
bowilleatyou/a109069a-5736-4312-9660-bb5c8e3fa828 | bowilleatyou | 2025-03-31T16:03:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T11:29:15Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
akhauriyash/DeepSeek-R1-Distill-Llama-8B-Butler | akhauriyash | 2025-03-31T16:03:35Z | 45 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama_butler",
"feature-extraction",
"custom_code",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"license:mit",
"region:us"
]
| feature-extraction | 2025-03-10T15:26:51Z | ---
license: mit
library_name: transformers
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
---
# TokenButler
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/abdelfattah-lab/TokenButler/blob/main/figs/tokenbutlerlogo.png?raw=true" width="50%" alt="TokenButler" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<!-- Paper Badge -->
<a href="https://github.com/abdelfattah-lab/TokenButler/blob/main/TokenButler_Draft.pdf" target="_blank" style="margin: 2px;">
<img alt="Paper"
src="https://img.shields.io/badge/Paper-View-orange?logo=readthedocs&logoColor=white"
style="display: inline-block; vertical-align: middle;"/>
</a>
<!-- GitHub Badge -->
<a href="https://github.com/abdelfattah-lab/TokenButler" target="_blank" style="margin: 2px;">
<img alt="GitHub"
src="https://img.shields.io/badge/GitHub-Repo-black?logo=github&logoColor=white"
style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<br>
The collection of TokenButler models can be found [here](https://huggingface.co/collections/akhauriyash/tokenbutler-67cf181b5762d0d60e5f312b). To run the `DeepSeek-R1-Distill-Llama-8B` model, follow:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
question = "If millionaires have butlers, why don't million dollar language models have a butler too? I think its because "
model_name = "akhauriyash/DeepSeek-R1-Distill-Llama-8B-Butler"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
response = generator(question, max_new_tokens=200, do_sample=True, top_p=0.95, temperature=0.7)
print(response[0]['generated_text'][len(question):])
```
Note that the default configured sparsity is 50%, with a sliding window of 128 tokens and 8 anchor tokens. To change the sparsity, you can use the following function after loading the model. Please note that `fixed` is the only supported strategy at the moment; it fixes the sparsity of every layer (except the first) at the given percentage. This can also be found in `test_hf.py`. The sliding window and anchor tokens can be changed in a similar manner.
```python
def set_sparsity(model, sparsity):
for module in model.modules():
if module.__class__.__name__.__contains__("AttentionExperimental"):
module.token_sparse_method = sparsity
module.set_token_sparsity()
return model
model = set_sparsity(model, "fixed_60pc")
```
# Predictor Architecture
<div align="center">
<img src="https://github.com/abdelfattah-lab/TokenButler/blob/main/figs/mainfig.png?raw=true" width="100%" alt="TokenButlerFigure" />
</div>
# Custom Synthetic Task
<div align="center">
<img src="https://github.com/abdelfattah-lab/TokenButler/blob/main/figs/datasetfig.png?raw=true" width="100%" alt="Synthetic Tasks" />
</div> |
mncmbb/gemma-2-2B-it-thinking-function_calling-V0 | mncmbb | 2025-03-31T16:03:19Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T14:40:19Z | ---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: gemma-2-2B-it-thinking-function_calling-V0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-2-2B-it-thinking-function_calling-V0
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mncmbb/gemma-2-2B-it-thinking-function_calling-V0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
UICHEOL-HWANG/GreenFinance-DeepSeek-Llama3.1-8B | UICHEOL-HWANG | 2025-03-31T16:00:28Z | 51 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-24T04:41:54Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/BianCang-Qwen2.5-7B-GGUF | mradermacher | 2025-03-31T15:59:53Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:QLU-NLP/BianCang-Qwen2.5-7B",
"base_model:quantized:QLU-NLP/BianCang-Qwen2.5-7B",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-03-31T15:39:39Z | ---
base_model: QLU-NLP/BianCang-Qwen2.5-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/QLU-NLP/BianCang-Qwen2.5-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
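As a concrete illustration (not from the original card), a single-file quant from the table below can be run locally with llama-cpp-python; the file name, prompt, and context size here are assumptions:

```python
# Hedged sketch: run one of the static quants listed below with llama-cpp-python.
# Assumes BianCang-Qwen2.5-7B.Q4_K_M.gguf has already been downloaded from this repo.
from llama_cpp import Llama

llm = Llama(model_path="BianCang-Qwen2.5-7B.Q4_K_M.gguf", n_ctx=4096)
out = llm("What is traditional Chinese medicine?", max_tokens=128)
print(out["choices"][0]["text"])
```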
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BianCang-Qwen2.5-7B-GGUF/resolve/main/BianCang-Qwen2.5-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/BianCang-Qwen2.5-7B-GGUF/resolve/main/BianCang-Qwen2.5-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/BianCang-Qwen2.5-7B-GGUF/resolve/main/BianCang-Qwen2.5-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BianCang-Qwen2.5-7B-GGUF/resolve/main/BianCang-Qwen2.5-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/BianCang-Qwen2.5-7B-GGUF/resolve/main/BianCang-Qwen2.5-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/BianCang-Qwen2.5-7B-GGUF/resolve/main/BianCang-Qwen2.5-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BianCang-Qwen2.5-7B-GGUF/resolve/main/BianCang-Qwen2.5-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BianCang-Qwen2.5-7B-GGUF/resolve/main/BianCang-Qwen2.5-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/BianCang-Qwen2.5-7B-GGUF/resolve/main/BianCang-Qwen2.5-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/BianCang-Qwen2.5-7B-GGUF/resolve/main/BianCang-Qwen2.5-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/BianCang-Qwen2.5-7B-GGUF/resolve/main/BianCang-Qwen2.5-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/BianCang-Qwen2.5-7B-GGUF/resolve/main/BianCang-Qwen2.5-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
MinaMila/llama_instbase_unlearned_GermanCredit_5ep_22 | MinaMila | 2025-03-31T15:57:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:MinaMila/llama3_unlearning_general_methode",
"base_model:finetune:MinaMila/llama3_unlearning_general_methode",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T15:55:03Z | ---
base_model: MinaMila/llama3_unlearning_general_methode
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** MinaMila/llama3_unlearning_general_methode
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Qwen2.5-1.5B-Perfumassist-GGUF | mradermacher | 2025-03-31T15:56:15Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:phatvucoder/Qwen2.5-1.5B-Perfumassist",
"base_model:quantized:phatvucoder/Qwen2.5-1.5B-Perfumassist",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-03-31T14:42:19Z | ---
base_model: phatvucoder/Qwen2.5-1.5B-Perfumassist
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/phatvucoder/Qwen2.5-1.5B-Perfumassist
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
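For illustration (not part of the original card), a quant from the table below can be fetched with huggingface_hub and loaded via llama-cpp-python; the chosen file name and prompt are assumptions:

```python
# Hedged sketch: download the Q4_K_M quant listed below and load it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Qwen2.5-1.5B-Perfumassist-GGUF",
    filename="Qwen2.5-1.5B-Perfumassist.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Recommend a fresh citrus perfume:", max_tokens=96)["choices"][0]["text"])
```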
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Perfumassist-GGUF/resolve/main/Qwen2.5-1.5B-Perfumassist.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Perfumassist-GGUF/resolve/main/Qwen2.5-1.5B-Perfumassist.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Perfumassist-GGUF/resolve/main/Qwen2.5-1.5B-Perfumassist.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Perfumassist-GGUF/resolve/main/Qwen2.5-1.5B-Perfumassist.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Perfumassist-GGUF/resolve/main/Qwen2.5-1.5B-Perfumassist.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Perfumassist-GGUF/resolve/main/Qwen2.5-1.5B-Perfumassist.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Perfumassist-GGUF/resolve/main/Qwen2.5-1.5B-Perfumassist.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Perfumassist-GGUF/resolve/main/Qwen2.5-1.5B-Perfumassist.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Perfumassist-GGUF/resolve/main/Qwen2.5-1.5B-Perfumassist.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Perfumassist-GGUF/resolve/main/Qwen2.5-1.5B-Perfumassist.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Perfumassist-GGUF/resolve/main/Qwen2.5-1.5B-Perfumassist.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-Perfumassist-GGUF/resolve/main/Qwen2.5-1.5B-Perfumassist.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Jonjew/LuluMartinez | Jonjew | 2025-03-31T15:55:55Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
]
| text-to-image | 2025-03-31T15:55:46Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
The reflective surfaces of her earrings and outfit create a dynamic
interplay with the environment, emphasizing her sleek presence. The
minimalist background allows the viewer to focus on her eye-catching fashion
and makeup. Captured with a Sony A7R IV camera and a 50mm f/1.2 G Master
lens for a beauty editorial shoot, this composition highlights her
fashion-forward aesthetic. Inspired by avant-garde fashion photography from
Sølve Sundsbø, this image blends contemporary fashion and minimalist
elegance, embodying mythp0rt and niji_flux styles for a sleek, high-fashion
vision. ((glow_skin, iridescent skin, oily skin, portrait))
output:
url: images/Lulu Martinez_00016_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: lulumartinez
license: unknown
---
# Lulu Martinez
<Gallery />
## Model description
From https://civitai.com/models/1103411/lulu-martinez-flux-adult-film-actress?modelVersionId=1239529
Trigger word: `lulumartinez`
## Trigger words
You should use `lulumartinez` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/LuluMartinez/tree/main) them in the Files & versions tab.
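A minimal diffusers sketch (not from the original card; the generation settings and LoRA weight resolution are assumptions) for applying this LoRA to FLUX.1-dev:

```python
# Hedged sketch: apply the LoRA to FLUX.1-dev with diffusers.
# Assumes access to black-forest-labs/FLUX.1-dev and a GPU with enough memory.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Jonjew/LuluMartinez")
image = pipe(
    "lulumartinez, high-fashion beauty portrait, glowing skin",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("lulumartinez.png")
```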
|
ayushexel/reranker-MiniLM-L6-H384-uncased-gooaq-5-epoch-1995000 | ayushexel | 2025-03-31T15:54:55Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"cross-encoder",
"generated_from_trainer",
"dataset_size:11456701",
"loss:BinaryCrossEntropyLoss",
"text-ranking",
"en",
"arxiv:1908.10084",
"base_model:nreimers/MiniLM-L6-H384-uncased",
"base_model:finetune:nreimers/MiniLM-L6-H384-uncased",
"license:apache-2.0",
"model-index",
"region:us"
]
| text-ranking | 2025-03-31T15:54:41Z | ---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- cross-encoder
- generated_from_trainer
- dataset_size:11456701
- loss:BinaryCrossEntropyLoss
base_model: nreimers/MiniLM-L6-H384-uncased
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
model-index:
- name: CrossEncoder based on nreimers/MiniLM-L6-H384-uncased
results:
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: gooaq dev
type: gooaq-dev
metrics:
- type: map
value: 0.4719
name: Map
- type: mrr@10
value: 0.4714
name: Mrr@10
- type: ndcg@10
value: 0.5149
name: Ndcg@10
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: NanoMSMARCO R100
type: NanoMSMARCO_R100
metrics:
- type: map
value: 0.3405
name: Map
- type: mrr@10
value: 0.3251
name: Mrr@10
- type: ndcg@10
value: 0.409
name: Ndcg@10
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: NanoNFCorpus R100
type: NanoNFCorpus_R100
metrics:
- type: map
value: 0.3375
name: Map
- type: mrr@10
value: 0.5157
name: Mrr@10
- type: ndcg@10
value: 0.3596
name: Ndcg@10
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: NanoNQ R100
type: NanoNQ_R100
metrics:
- type: map
value: 0.3251
name: Map
- type: mrr@10
value: 0.3406
name: Mrr@10
- type: ndcg@10
value: 0.4065
name: Ndcg@10
- task:
type: cross-encoder-nano-beir
name: Cross Encoder Nano BEIR
dataset:
name: NanoBEIR R100 mean
type: NanoBEIR_R100_mean
metrics:
- type: map
value: 0.3344
name: Map
- type: mrr@10
value: 0.3938
name: Mrr@10
- type: ndcg@10
value: 0.3917
name: Ndcg@10
---
# CrossEncoder based on nreimers/MiniLM-L6-H384-uncased
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [nreimers/MiniLM-L6-H384-uncased](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [nreimers/MiniLM-L6-H384-uncased](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) <!-- at revision 3276f0fac9d818781d7a1327b3ff818fc4e643c0 -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("ayushexel/reranker-MiniLM-L6-H384-uncased-gooaq-5-epoch-1995000")
# Get scores for pairs of texts
pairs = [
['when is the 2020 democratic presidential debate?', 'Major candidates The nomination will be made official at the 2020 Democratic National Convention, tentatively scheduled for August 17–20, 2020 in Milwaukee, Wisconsin.'],
['when is the 2020 democratic presidential debate?', 'Major candidates As of June 8, 2020, former Vice President Joe Biden became the presumptive presidential nominee by amassing enough delegates to secure the nomination.'],
['when is the 2020 democratic presidential debate?', 'On March 5, 2019, Bloomberg announced that he would not run for president in 2020; instead he encouraged the Democratic Party to "nominate a Democrat who will be in the strongest position to defeat Donald Trump".'],
['when is the 2020 democratic presidential debate?', 'The electoral map for the 2020 election, based on populations from the 2010 Census. The 2020 United States presidential election is scheduled for Tuesday, November 3, 2020. It will be the 59th quadrennial presidential election.'],
['when is the 2020 democratic presidential debate?', 'There were a total of 29 major Democratic candidates. Of these, 23 candidates participated in at least one debate. Only Joe Biden and Bernie Sanders participated in all the debates; Pete Buttigieg, Amy Klobuchar, and Elizabeth Warren participated in all but one debate.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
'when is the 2020 democratic presidential debate?',
[
'Major candidates The nomination will be made official at the 2020 Democratic National Convention, tentatively scheduled for August 17–20, 2020 in Milwaukee, Wisconsin.',
'Major candidates As of June 8, 2020, former Vice President Joe Biden became the presumptive presidential nominee by amassing enough delegates to secure the nomination.',
'On March 5, 2019, Bloomberg announced that he would not run for president in 2020; instead he encouraged the Democratic Party to "nominate a Democrat who will be in the strongest position to defeat Donald Trump".',
'The electoral map for the 2020 election, based on populations from the 2010 Census. The 2020 United States presidential election is scheduled for Tuesday, November 3, 2020. It will be the 59th quadrennial presidential election.',
'There were a total of 29 major Democratic candidates. Of these, 23 candidates participated in at least one debate. Only Joe Biden and Bernie Sanders participated in all the debates; Pete Buttigieg, Amy Klobuchar, and Elizabeth Warren participated in all but one debate.',
]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Reranking
* Dataset: `gooaq-dev`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
```json
{
"at_k": 10,
"always_rerank_positives": false
}
```
| Metric | Value |
|:------------|:---------------------|
| map | 0.4719 (+0.2021) |
| mrr@10 | 0.4714 (+0.2125) |
| **ndcg@10** | **0.5149 (+0.2052)** |
#### Cross Encoder Reranking
* Datasets: `NanoMSMARCO_R100`, `NanoNFCorpus_R100` and `NanoNQ_R100`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
```json
{
"at_k": 10,
"always_rerank_positives": true
}
```
| Metric | NanoMSMARCO_R100 | NanoNFCorpus_R100 | NanoNQ_R100 |
|:------------|:---------------------|:---------------------|:---------------------|
| map | 0.3405 (-0.1491) | 0.3375 (+0.0765) | 0.3251 (-0.0945) |
| mrr@10 | 0.3251 (-0.1524) | 0.5157 (+0.0159) | 0.3406 (-0.0861) |
| **ndcg@10** | **0.4090 (-0.1314)** | **0.3596 (+0.0346)** | **0.4065 (-0.0942)** |
#### Cross Encoder Nano BEIR
* Dataset: `NanoBEIR_R100_mean`
* Evaluated with [<code>CrossEncoderNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco",
"nfcorpus",
"nq"
],
"rerank_k": 100,
"at_k": 10,
"always_rerank_positives": true
}
```
| Metric | Value |
|:------------|:---------------------|
| map | 0.3344 (-0.0557) |
| mrr@10 | 0.3938 (-0.0742) |
| **ndcg@10** | **0.3917 (-0.0637)** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 11,456,701 training samples
* Columns: <code>question</code>, <code>answer</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | question | answer | label |
|:--------|:-----------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 18 characters</li><li>mean: 43.15 characters</li><li>max: 83 characters</li></ul> | <ul><li>min: 59 characters</li><li>mean: 257.34 characters</li><li>max: 388 characters</li></ul> | <ul><li>0: ~82.40%</li><li>1: ~17.60%</li></ul> |
* Samples:
| question | answer | label |
|:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>when is the 2020 democratic presidential debate?</code> | <code>Major candidates The nomination will be made official at the 2020 Democratic National Convention, tentatively scheduled for August 17–20, 2020 in Milwaukee, Wisconsin.</code> | <code>1</code> |
| <code>when is the 2020 democratic presidential debate?</code> | <code>Major candidates As of June 8, 2020, former Vice President Joe Biden became the presumptive presidential nominee by amassing enough delegates to secure the nomination.</code> | <code>0</code> |
| <code>when is the 2020 democratic presidential debate?</code> | <code>On March 5, 2019, Bloomberg announced that he would not run for president in 2020; instead he encouraged the Democratic Party to "nominate a Democrat who will be in the strongest position to defeat Donald Trump".</code> | <code>0</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
```json
{
"activation_fn": "torch.nn.modules.linear.Identity",
"pos_weight": 5
}
```
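For orientation (not part of the original card), a minimal finetuning sketch with this loss might look as follows; the import paths follow the sentence-transformers cross-encoder documentation linked above (and may vary by version), and the tiny dataset is purely illustrative:

```python
# Hedged sketch: finetune a CrossEncoder with BinaryCrossEntropyLoss (pos_weight=5),
# mirroring the loss configuration above. The one-row dataset is illustrative only.
import torch
from datasets import Dataset
from sentence_transformers.cross_encoder import CrossEncoder, CrossEncoderTrainer
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

train_dataset = Dataset.from_dict({
    "question": ["when is the 2020 democratic presidential debate?"],
    "answer": ["Major candidates The nomination will be made official at the 2020 Democratic National Convention."],
    "label": [1],
})

model = CrossEncoder("nreimers/MiniLM-L6-H384-uncased", num_labels=1)
loss = BinaryCrossEntropyLoss(model, pos_weight=torch.tensor(5.0))

trainer = CrossEncoderTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```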
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
- `dataloader_num_workers`: 12
- `load_best_model_at_end`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 12
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | gooaq-dev_ndcg@10 | NanoMSMARCO_R100_ndcg@10 | NanoNFCorpus_R100_ndcg@10 | NanoNQ_R100_ndcg@10 | NanoBEIR_R100_mean_ndcg@10 |
|:------:|:------:|:-------------:|:-----------------:|:------------------------:|:-------------------------:|:-------------------:|:--------------------------:|
| -1 | -1 | - | 0.1023 (-0.2073) | 0.0063 (-0.5341) | 0.2762 (-0.0489) | 0.0240 (-0.4766) | 0.1022 (-0.3532) |
| 0.0000 | 1 | 1.1577 | - | - | - | - | - |
| 0.0045 | 200 | 1.1721 | - | - | - | - | - |
| 0.0089 | 400 | 1.1758 | - | - | - | - | - |
| 0.0134 | 600 | 1.1755 | - | - | - | - | - |
| 0.0179 | 800 | 1.1809 | - | - | - | - | - |
| 0.0223 | 1000 | 1.1717 | - | - | - | - | - |
| 0.0268 | 1200 | 1.1723 | - | - | - | - | - |
| 0.0313 | 1400 | 1.1687 | - | - | - | - | - |
| 0.0358 | 1600 | 1.1727 | - | - | - | - | - |
| 0.0402 | 1800 | 1.177 | - | - | - | - | - |
| 0.0447 | 2000 | 1.1792 | - | - | - | - | - |
| 0.0492 | 2200 | 1.172 | - | - | - | - | - |
| 0.0536 | 2400 | 1.1117 | - | - | - | - | - |
| 0.0581 | 2600 | 1.0198 | - | - | - | - | - |
| 0.0626 | 2800 | 0.9849 | - | - | - | - | - |
| 0.0670 | 3000 | 0.9572 | - | - | - | - | - |
| 0.0715 | 3200 | 0.9359 | - | - | - | - | - |
| 0.0760 | 3400 | 0.9216 | - | - | - | - | - |
| 0.0804 | 3600 | 0.9244 | - | - | - | - | - |
| 0.0849 | 3800 | 0.914 | - | - | - | - | - |
| 0.0894 | 4000 | 0.9056 | - | - | - | - | - |
| 0.0938 | 4200 | 0.8928 | - | - | - | - | - |
| 0.0983 | 4400 | 0.8698 | - | - | - | - | - |
| 0.1028 | 4600 | 0.8746 | - | - | - | - | - |
| 0.1073 | 4800 | 0.8705 | - | - | - | - | - |
| 0.1117 | 5000 | 0.8542 | - | - | - | - | - |
| 0.1162 | 5200 | 0.8512 | - | - | - | - | - |
| 0.1207 | 5400 | 0.8372 | - | - | - | - | - |
| 0.1251 | 5600 | 0.8328 | - | - | - | - | - |
| 0.1296 | 5800 | 0.8195 | - | - | - | - | - |
| 0.1341 | 6000 | 0.8259 | - | - | - | - | - |
| 0.1385 | 6200 | 0.8161 | - | - | - | - | - |
| 0.1430 | 6400 | 0.8108 | - | - | - | - | - |
| 0.1475 | 6600 | 0.792 | - | - | - | - | - |
| 0.1519 | 6800 | 0.79 | - | - | - | - | - |
| 0.1564 | 7000 | 0.7849 | - | - | - | - | - |
| 0.1609 | 7200 | 0.7794 | - | - | - | - | - |
| 0.1654 | 7400 | 0.7649 | - | - | - | - | - |
| 0.1698 | 7600 | 0.7672 | - | - | - | - | - |
| 0.1743 | 7800 | 0.7661 | - | - | - | - | - |
| 0.1788 | 8000 | 0.7458 | - | - | - | - | - |
| 0.1832 | 8200 | 0.7499 | - | - | - | - | - |
| 0.1877 | 8400 | 0.7582 | - | - | - | - | - |
| 0.1922 | 8600 | 0.7422 | - | - | - | - | - |
| 0.1966 | 8800 | 0.7474 | - | - | - | - | - |
| 0.2011 | 9000 | 0.7387 | - | - | - | - | - |
| 0.2056 | 9200 | 0.7212 | - | - | - | - | - |
| 0.2100 | 9400 | 0.7187 | - | - | - | - | - |
| 0.2145 | 9600 | 0.7225 | - | - | - | - | - |
| 0.2190 | 9800 | 0.7253 | - | - | - | - | - |
| 0.2234 | 10000 | 0.7101 | - | - | - | - | - |
| 0.2279 | 10200 | 0.7011 | - | - | - | - | - |
| 0.2324 | 10400 | 0.6992 | - | - | - | - | - |
| 0.2369 | 10600 | 0.7016 | - | - | - | - | - |
| 0.2413 | 10800 | 0.7005 | - | - | - | - | - |
| 0.2458 | 11000 | 0.6927 | - | - | - | - | - |
| 0.2503 | 11200 | 0.697 | - | - | - | - | - |
| 0.2547 | 11400 | 0.6829 | - | - | - | - | - |
| 0.2592 | 11600 | 0.6821 | - | - | - | - | - |
| 0.2637 | 11800 | 0.6802 | - | - | - | - | - |
| 0.2681 | 12000 | 0.6659 | - | - | - | - | - |
| 0.2726 | 12200 | 0.6696 | - | - | - | - | - |
| 0.2771 | 12400 | 0.6746 | - | - | - | - | - |
| 0.2815 | 12600 | 0.6722 | - | - | - | - | - |
| 0.2860 | 12800 | 0.6768 | - | - | - | - | - |
| 0.2905 | 13000 | 0.6637 | - | - | - | - | - |
| 0.2950 | 13200 | 0.66 | - | - | - | - | - |
| 0.2994 | 13400 | 0.651 | - | - | - | - | - |
| 0.3039 | 13600 | 0.6598 | - | - | - | - | - |
| 0.3084 | 13800 | 0.6477 | - | - | - | - | - |
| 0.3128 | 14000 | 0.6414 | - | - | - | - | - |
| 0.3173 | 14200 | 0.6531 | - | - | - | - | - |
| 0.3218 | 14400 | 0.6409 | - | - | - | - | - |
| 0.3262 | 14600 | 0.6419 | - | - | - | - | - |
| 0.3307 | 14800 | 0.6405 | - | - | - | - | - |
| 0.3352 | 15000 | 0.6357 | - | - | - | - | - |
| 0.3396 | 15200 | 0.6406 | - | - | - | - | - |
| 0.3441 | 15400 | 0.6326 | - | - | - | - | - |
| 0.3486 | 15600 | 0.6376 | - | - | - | - | - |
| 0.3530 | 15800 | 0.6314 | - | - | - | - | - |
| 0.3575 | 16000 | 0.6297 | - | - | - | - | - |
| 0.3620 | 16200 | 0.6201 | - | - | - | - | - |
| 0.3665 | 16400 | 0.6299 | - | - | - | - | - |
| 0.3709 | 16600 | 0.6258 | - | - | - | - | - |
| 0.3754 | 16800 | 0.6251 | - | - | - | - | - |
| 0.3799 | 17000 | 0.6256 | - | - | - | - | - |
| 0.3843 | 17200 | 0.62 | - | - | - | - | - |
| 0.3888 | 17400 | 0.6169 | - | - | - | - | - |
| 0.3933 | 17600 | 0.6192 | - | - | - | - | - |
| 0.3977 | 17800 | 0.6131 | - | - | - | - | - |
| 0.4022 | 18000 | 0.6202 | - | - | - | - | - |
| 0.4067 | 18200 | 0.6033 | - | - | - | - | - |
| 0.4111 | 18400 | 0.6086 | - | - | - | - | - |
| 0.4156 | 18600 | 0.6097 | - | - | - | - | - |
| 0.4201 | 18800 | 0.6014 | - | - | - | - | - |
| 0.4246 | 19000 | 0.6055 | - | - | - | - | - |
| 0.4290 | 19200 | 0.6047 | - | - | - | - | - |
| 0.4335 | 19400 | 0.5985 | - | - | - | - | - |
| 0.4380 | 19600 | 0.5998 | - | - | - | - | - |
| 0.4424 | 19800 | 0.5999 | - | - | - | - | - |
| 0.4469 | 20000 | 0.595 | - | - | - | - | - |
| 0.4514 | 20200 | 0.5961 | - | - | - | - | - |
| 0.4558 | 20400 | 0.5918 | - | - | - | - | - |
| 0.4603 | 20600 | 0.5928 | - | - | - | - | - |
| 0.4648 | 20800 | 0.5833 | - | - | - | - | - |
| 0.4692 | 21000 | 0.589 | - | - | - | - | - |
| 0.4737 | 21200 | 0.5892 | - | - | - | - | - |
| 0.4782 | 21400 | 0.5896 | - | - | - | - | - |
| 0.4826 | 21600 | 0.5887 | - | - | - | - | - |
| 0.4871 | 21800 | 0.5936 | - | - | - | - | - |
| 0.4916 | 22000 | 0.5933 | - | - | - | - | - |
| 0.4961 | 22200 | 0.5879 | - | - | - | - | - |
| 0.5005 | 22400 | 0.5877 | - | - | - | - | - |
| 0.5050 | 22600 | 0.5906 | - | - | - | - | - |
| 0.5095 | 22800 | 0.5636 | - | - | - | - | - |
| 0.5139 | 23000 | 0.5889 | - | - | - | - | - |
| 0.5184 | 23200 | 0.5739 | - | - | - | - | - |
| 0.5229 | 23400 | 0.569 | - | - | - | - | - |
| 0.5273 | 23600 | 0.5739 | - | - | - | - | - |
| 0.5318 | 23800 | 0.5627 | - | - | - | - | - |
| 0.5363 | 24000 | 0.5762 | - | - | - | - | - |
| 0.5407 | 24200 | 0.5664 | - | - | - | - | - |
| 0.5452 | 24400 | 0.576 | - | - | - | - | - |
| 0.5497 | 24600 | 0.5583 | - | - | - | - | - |
| 0.5542 | 24800 | 0.5723 | - | - | - | - | - |
| 0.5586 | 25000 | 0.5677 | - | - | - | - | - |
| 0.5631 | 25200 | 0.5586 | - | - | - | - | - |
| 0.5676 | 25400 | 0.5643 | - | - | - | - | - |
| 0.5720 | 25600 | 0.5605 | - | - | - | - | - |
| 0.5765 | 25800 | 0.5594 | - | - | - | - | - |
| 0.5810 | 26000 | 0.5571 | - | - | - | - | - |
| 0.5854 | 26200 | 0.5557 | - | - | - | - | - |
| 0.5899 | 26400 | 0.5531 | - | - | - | - | - |
| 0.5944 | 26600 | 0.5495 | - | - | - | - | - |
| 0.5988 | 26800 | 0.5521 | - | - | - | - | - |
| 0.6033 | 27000 | 0.5504 | - | - | - | - | - |
| 0.6078 | 27200 | 0.5602 | - | - | - | - | - |
| 0.6122 | 27400 | 0.5569 | - | - | - | - | - |
| 0.6167 | 27600 | 0.5444 | - | - | - | - | - |
| 0.6212 | 27800 | 0.547 | - | - | - | - | - |
| 0.6257 | 28000 | 0.5519 | - | - | - | - | - |
| 0.6301 | 28200 | 0.5422 | - | - | - | - | - |
| 0.6346 | 28400 | 0.5461 | - | - | - | - | - |
| 0.6391 | 28600 | 0.5511 | - | - | - | - | - |
| 0.6435 | 28800 | 0.5443 | - | - | - | - | - |
| 0.6480 | 29000 | 0.5408 | - | - | - | - | - |
| 0.6525 | 29200 | 0.548 | - | - | - | - | - |
| 0.6569 | 29400 | 0.5414 | - | - | - | - | - |
| 0.6614 | 29600 | 0.5409 | - | - | - | - | - |
| 0.6659 | 29800 | 0.5365 | - | - | - | - | - |
| 0.6703 | 30000 | 0.5363 | - | - | - | - | - |
| 0.6748 | 30200 | 0.5335 | - | - | - | - | - |
| 0.6793 | 30400 | 0.5413 | - | - | - | - | - |
| 0.6838 | 30600 | 0.5357 | - | - | - | - | - |
| 0.6882 | 30800 | 0.5376 | - | - | - | - | - |
| 0.6927 | 31000 | 0.539 | - | - | - | - | - |
| 0.6972 | 31200 | 0.5265 | - | - | - | - | - |
| 0.7016 | 31400 | 0.5267 | - | - | - | - | - |
| 0.7061 | 31600 | 0.5335 | - | - | - | - | - |
| 0.7106 | 31800 | 0.5471 | - | - | - | - | - |
| 0.7150 | 32000 | 0.5309 | - | - | - | - | - |
| 0.7195 | 32200 | 0.5348 | - | - | - | - | - |
| 0.7240 | 32400 | 0.5147 | - | - | - | - | - |
| 0.7284 | 32600 | 0.5254 | - | - | - | - | - |
| 0.7329 | 32800 | 0.5276 | - | - | - | - | - |
| 0.7374 | 33000 | 0.5236 | - | - | - | - | - |
| 0.7418 | 33200 | 0.5353 | - | - | - | - | - |
| 0.7463 | 33400 | 0.5286 | - | - | - | - | - |
| 0.7508 | 33600 | 0.5269 | - | - | - | - | - |
| 0.7553 | 33800 | 0.5326 | - | - | - | - | - |
| 0.7597 | 34000 | 0.5205 | - | - | - | - | - |
| 0.7642 | 34200 | 0.5225 | - | - | - | - | - |
| 0.7687 | 34400 | 0.523 | - | - | - | - | - |
| 0.7731 | 34600 | 0.5293 | - | - | - | - | - |
| 0.7776 | 34800 | 0.5174 | - | - | - | - | - |
| 0.7821 | 35000 | 0.5237 | - | - | - | - | - |
| 0.7865 | 35200 | 0.5137 | - | - | - | - | - |
| 0.7910 | 35400 | 0.5255 | - | - | - | - | - |
| 0.7955 | 35600 | 0.5285 | - | - | - | - | - |
| 0.7999 | 35800 | 0.5213 | - | - | - | - | - |
| 0.8044 | 36000 | 0.5156 | - | - | - | - | - |
| 0.8089 | 36200 | 0.5218 | - | - | - | - | - |
| 0.8134 | 36400 | 0.5163 | - | - | - | - | - |
| 0.8178 | 36600 | 0.515 | - | - | - | - | - |
| 0.8223 | 36800 | 0.5099 | - | - | - | - | - |
| 0.8268 | 37000 | 0.5107 | - | - | - | - | - |
| 0.8312 | 37200 | 0.51 | - | - | - | - | - |
| 0.8357 | 37400 | 0.5108 | - | - | - | - | - |
| 0.8402 | 37600 | 0.5101 | - | - | - | - | - |
| 0.8446 | 37800 | 0.5125 | - | - | - | - | - |
| 0.8491 | 38000 | 0.5194 | - | - | - | - | - |
| 0.8536 | 38200 | 0.5125 | - | - | - | - | - |
| 0.8580 | 38400 | 0.5168 | - | - | - | - | - |
| 0.8625 | 38600 | 0.5183 | - | - | - | - | - |
| 0.8670 | 38800 | 0.5112 | - | - | - | - | - |
| 0.8714 | 39000 | 0.5121 | - | - | - | - | - |
| 0.8759 | 39200 | 0.5045 | - | - | - | - | - |
| 0.8804 | 39400 | 0.5095 | - | - | - | - | - |
| 0.8849 | 39600 | 0.4999 | - | - | - | - | - |
| 0.8893 | 39800 | 0.502 | - | - | - | - | - |
| 0.8938 | 40000 | 0.5005 | - | - | - | - | - |
| 0.8983 | 40200 | 0.5057 | - | - | - | - | - |
| 0.9027 | 40400 | 0.5 | - | - | - | - | - |
| 0.9072 | 40600 | 0.5081 | - | - | - | - | - |
| 0.9117 | 40800 | 0.5042 | - | - | - | - | - |
| 0.9161 | 41000 | 0.5006 | - | - | - | - | - |
| 0.9206 | 41200 | 0.512 | - | - | - | - | - |
| 0.9251 | 41400 | 0.5061 | - | - | - | - | - |
| 0.9295 | 41600 | 0.5056 | - | - | - | - | - |
| 0.9340 | 41800 | 0.5069 | - | - | - | - | - |
| 0.9385 | 42000 | 0.5018 | - | - | - | - | - |
| 0.9430 | 42200 | 0.5055 | - | - | - | - | - |
| 0.9474 | 42400 | 0.4955 | - | - | - | - | - |
| 0.9519 | 42600 | 0.4871 | - | - | - | - | - |
| 0.9564 | 42800 | 0.5031 | - | - | - | - | - |
| 0.9608 | 43000 | 0.4969 | - | - | - | - | - |
| 0.9653 | 43200 | 0.4957 | - | - | - | - | - |
| 0.9698 | 43400 | 0.5037 | - | - | - | - | - |
| 0.9742 | 43600 | 0.5066 | - | - | - | - | - |
| 0.9787 | 43800 | 0.4944 | - | - | - | - | - |
| 0.9832 | 44000 | 0.4982 | - | - | - | - | - |
| 0.9876 | 44200 | 0.5004 | - | - | - | - | - |
| 0.9921 | 44400 | 0.4972 | - | - | - | - | - |
| 0.9966 | 44600 | 0.4964 | - | - | - | - | - |
| 1.0011 | 44800 | 0.4917 | - | - | - | - | - |
| 1.0055 | 45000 | 0.4892 | - | - | - | - | - |
| 1.0100 | 45200 | 0.4774 | - | - | - | - | - |
| 1.0145 | 45400 | 0.4784 | - | - | - | - | - |
| 1.0189 | 45600 | 0.4782 | - | - | - | - | - |
| 1.0234 | 45800 | 0.4793 | - | - | - | - | - |
| 1.0279 | 46000 | 0.4846 | - | - | - | - | - |
| 1.0323 | 46200 | 0.4746 | - | - | - | - | - |
| 1.0368 | 46400 | 0.4748 | - | - | - | - | - |
| 1.0413 | 46600 | 0.481 | - | - | - | - | - |
| 1.0457 | 46800 | 0.4817 | - | - | - | - | - |
| 1.0502 | 47000 | 0.4825 | - | - | - | - | - |
| 1.0547 | 47200 | 0.4739 | - | - | - | - | - |
| 1.0591 | 47400 | 0.4752 | - | - | - | - | - |
| 1.0636 | 47600 | 0.4745 | - | - | - | - | - |
| 1.0681 | 47800 | 0.4686 | - | - | - | - | - |
| 1.0726 | 48000 | 0.4868 | - | - | - | - | - |
| 1.0770 | 48200 | 0.4713 | - | - | - | - | - |
| 1.0815 | 48400 | 0.4685 | - | - | - | - | - |
| 1.0860 | 48600 | 0.4768 | - | - | - | - | - |
| 1.0904 | 48800 | 0.4761 | - | - | - | - | - |
| 1.0949 | 49000 | 0.4811 | - | - | - | - | - |
| 1.0994 | 49200 | 0.4746 | - | - | - | - | - |
| 1.1038 | 49400 | 0.4751 | - | - | - | - | - |
| 1.1083 | 49600 | 0.479 | - | - | - | - | - |
| 1.1128 | 49800 | 0.4759 | - | - | - | - | - |
| 1.1172 | 50000 | 0.4689 | - | - | - | - | - |
| 1.1217 | 50200 | 0.467 | - | - | - | - | - |
| 1.1262 | 50400 | 0.4716 | - | - | - | - | - |
| 1.1307 | 50600 | 0.4672 | - | - | - | - | - |
| 1.1351 | 50800 | 0.4681 | - | - | - | - | - |
| 1.1396 | 51000 | 0.4697 | - | - | - | - | - |
| 1.1441 | 51200 | 0.4685 | - | - | - | - | - |
| 1.1485 | 51400 | 0.4716 | - | - | - | - | - |
| 1.1530 | 51600 | 0.4716 | - | - | - | - | - |
| 1.1575 | 51800 | 0.4785 | - | - | - | - | - |
| 1.1619 | 52000 | 0.4631 | - | - | - | - | - |
| 1.1664 | 52200 | 0.4683 | - | - | - | - | - |
| 1.1709 | 52400 | 0.4697 | - | - | - | - | - |
| 1.1753 | 52600 | 0.464 | - | - | - | - | - |
| 1.1798 | 52800 | 0.4717 | - | - | - | - | - |
| 1.1843 | 53000 | 0.4672 | - | - | - | - | - |
| 1.1887 | 53200 | 0.4607 | - | - | - | - | - |
| 1.1932 | 53400 | 0.464 | - | - | - | - | - |
| 1.1977 | 53600 | 0.4705 | - | - | - | - | - |
| 1.2022 | 53800 | 0.4657 | - | - | - | - | - |
| 1.2066 | 54000 | 0.4665 | - | - | - | - | - |
| 1.2111 | 54200 | 0.4684 | - | - | - | - | - |
| 1.2156 | 54400 | 0.4644 | - | - | - | - | - |
| 1.2200 | 54600 | 0.4695 | - | - | - | - | - |
| 1.2245 | 54800 | 0.4629 | - | - | - | - | - |
| 1.2290 | 55000 | 0.4677 | - | - | - | - | - |
| 1.2334 | 55200 | 0.4627 | - | - | - | - | - |
| 1.2379 | 55400 | 0.463 | - | - | - | - | - |
| 1.2424 | 55600 | 0.4643 | - | - | - | - | - |
| 1.2468 | 55800 | 0.4612 | - | - | - | - | - |
| 1.2513 | 56000 | 0.4637 | - | - | - | - | - |
| 1.2558 | 56200 | 0.4614 | - | - | - | - | - |
| 1.2603 | 56400 | 0.4634 | - | - | - | - | - |
| 1.2647 | 56600 | 0.471 | - | - | - | - | - |
| 1.2692 | 56800 | 0.4622 | - | - | - | - | - |
| 1.2737 | 57000 | 0.4644 | - | - | - | - | - |
| 1.2781 | 57200 | 0.4643 | - | - | - | - | - |
| 1.2826 | 57400 | 0.4624 | - | - | - | - | - |
| 1.2871 | 57600 | 0.4598 | - | - | - | - | - |
| 1.2915 | 57800 | 0.4617 | - | - | - | - | - |
| 1.2960 | 58000 | 0.4618 | - | - | - | - | - |
| 1.3005 | 58200 | 0.4679 | - | - | - | - | - |
| 1.3049 | 58400 | 0.4604 | - | - | - | - | - |
| 1.3094 | 58600 | 0.4724 | - | - | - | - | - |
| 1.3139 | 58800 | 0.462 | - | - | - | - | - |
| 1.3183 | 59000 | 0.4569 | - | - | - | - | - |
| 1.3228 | 59200 | 0.4645 | - | - | - | - | - |
| 1.3273 | 59400 | 0.4565 | - | - | - | - | - |
| 1.3318 | 59600 | 0.4657 | - | - | - | - | - |
| 1.3362 | 59800 | 0.455 | - | - | - | - | - |
| 1.3407 | 60000 | 0.466 | - | - | - | - | - |
| 1.3452 | 60200 | 0.4708 | - | - | - | - | - |
| 1.3496 | 60400 | 0.4579 | - | - | - | - | - |
| 1.3541 | 60600 | 0.4516 | - | - | - | - | - |
| 1.3586 | 60800 | 0.4571 | - | - | - | - | - |
| 1.3630 | 61000 | 0.4486 | - | - | - | - | - |
| 1.3675 | 61200 | 0.4631 | - | - | - | - | - |
| 1.3720 | 61400 | 0.4656 | - | - | - | - | - |
| 1.3764 | 61600 | 0.4594 | - | - | - | - | - |
| 1.3809 | 61800 | 0.4609 | - | - | - | - | - |
| 1.3854 | 62000 | 0.4577 | - | - | - | - | - |
| 1.3899 | 62200 | 0.4578 | - | - | - | - | - |
| 1.3943 | 62400 | 0.4497 | - | - | - | - | - |
| 1.3988 | 62600 | 0.456 | - | - | - | - | - |
| 1.4033 | 62800 | 0.4522 | - | - | - | - | - |
| 1.4077 | 63000 | 0.4594 | - | - | - | - | - |
| 1.4122 | 63200 | 0.4503 | - | - | - | - | - |
| 1.4167 | 63400 | 0.4536 | - | - | - | - | - |
| 1.4211 | 63600 | 0.4607 | - | - | - | - | - |
| 1.4256 | 63800 | 0.4541 | - | - | - | - | - |
| 1.4301 | 64000 | 0.446 | - | - | - | - | - |
| 1.4345 | 64200 | 0.4518 | - | - | - | - | - |
| 1.4390 | 64400 | 0.4586 | - | - | - | - | - |
| 1.4435 | 64600 | 0.448 | - | - | - | - | - |
| 1.4479 | 64800 | 0.459 | - | - | - | - | - |
| 1.4524 | 65000 | 0.4515 | - | - | - | - | - |
| 1.4569 | 65200 | 0.4496 | - | - | - | - | - |
| 1.4614 | 65400 | 0.4581 | - | - | - | - | - |
| 1.4658 | 65600 | 0.4527 | - | - | - | - | - |
| 1.4703 | 65800 | 0.4498 | - | - | - | - | - |
| 1.4748 | 66000 | 0.456 | - | - | - | - | - |
| 1.4792 | 66200 | 0.4484 | - | - | - | - | - |
| 1.4837 | 66400 | 0.4447 | - | - | - | - | - |
| 1.4882 | 66600 | 0.4603 | - | - | - | - | - |
| 1.4926 | 66800 | 0.4492 | - | - | - | - | - |
| 1.4971 | 67000 | 0.4469 | - | - | - | - | - |
| 1.5016 | 67200 | 0.4559 | - | - | - | - | - |
| 1.5060 | 67400 | 0.4449 | - | - | - | - | - |
| 1.5105 | 67600 | 0.4399 | - | - | - | - | - |
| 1.5150 | 67800 | 0.458 | - | - | - | - | - |
| 1.5195 | 68000 | 0.4502 | - | - | - | - | - |
| 1.5239 | 68200 | 0.4503 | - | - | - | - | - |
| 1.5284 | 68400 | 0.4511 | - | - | - | - | - |
| 1.5329 | 68600 | 0.4418 | - | - | - | - | - |
| 1.5373 | 68800 | 0.4437 | - | - | - | - | - |
| 1.5418 | 69000 | 0.4444 | - | - | - | - | - |
| 1.5463 | 69200 | 0.4531 | - | - | - | - | - |
| 1.5507 | 69400 | 0.4488 | - | - | - | - | - |
| 1.5552 | 69600 | 0.4377 | - | - | - | - | - |
| 1.5597 | 69800 | 0.4547 | - | - | - | - | - |
| 1.5641 | 70000 | 0.4538 | - | - | - | - | - |
| 1.5686 | 70200 | 0.4516 | - | - | - | - | - |
| 1.5731 | 70400 | 0.4495 | - | - | - | - | - |
| 1.5775 | 70600 | 0.4482 | - | - | - | - | - |
| 1.5820 | 70800 | 0.4466 | - | - | - | - | - |
| 1.5865 | 71000 | 0.4449 | - | - | - | - | - |
| 1.5910 | 71200 | 0.4497 | - | - | - | - | - |
| 1.5954 | 71400 | 0.4448 | - | - | - | - | - |
| 1.5999 | 71600 | 0.4508 | - | - | - | - | - |
| 1.6044 | 71800 | 0.4463 | - | - | - | - | - |
| 1.6088 | 72000 | 0.4416 | - | - | - | - | - |
| 1.6133 | 72200 | 0.4509 | - | - | - | - | - |
| 1.6178 | 72400 | 0.4356 | - | - | - | - | - |
| 1.6222 | 72600 | 0.4476 | - | - | - | - | - |
| 1.6267 | 72800 | 0.4456 | - | - | - | - | - |
| 1.6312 | 73000 | 0.4409 | - | - | - | - | - |
| 1.6356 | 73200 | 0.444 | - | - | - | - | - |
| 1.6401 | 73400 | 0.4389 | - | - | - | - | - |
| 1.6446 | 73600 | 0.4459 | - | - | - | - | - |
| 1.6491 | 73800 | 0.4416 | - | - | - | - | - |
| 1.6535 | 74000 | 0.4278 | - | - | - | - | - |
| 1.6580 | 74200 | 0.4436 | - | - | - | - | - |
| 1.6625 | 74400 | 0.4476 | - | - | - | - | - |
| 1.6669 | 74600 | 0.4427 | - | - | - | - | - |
| 1.6714 | 74800 | 0.4513 | - | - | - | - | - |
| 1.6759 | 75000 | 0.4412 | - | - | - | - | - |
| 1.6803 | 75200 | 0.448 | - | - | - | - | - |
| 1.6848 | 75400 | 0.4454 | - | - | - | - | - |
| 1.6893 | 75600 | 0.438 | - | - | - | - | - |
| 1.6937 | 75800 | 0.4385 | - | - | - | - | - |
| 1.6982 | 76000 | 0.4381 | - | - | - | - | - |
| 1.7027 | 76200 | 0.4409 | - | - | - | - | - |
| 1.7071 | 76400 | 0.443 | - | - | - | - | - |
| 1.7116 | 76600 | 0.4437 | - | - | - | - | - |
| 1.7161 | 76800 | 0.4477 | - | - | - | - | - |
| 1.7206 | 77000 | 0.4486 | - | - | - | - | - |
| 1.7250 | 77200 | 0.4535 | - | - | - | - | - |
| 1.7295 | 77400 | 0.4451 | - | - | - | - | - |
| 1.7340 | 77600 | 0.4422 | - | - | - | - | - |
| 1.7384 | 77800 | 0.4463 | - | - | - | - | - |
| 1.7429 | 78000 | 0.4472 | - | - | - | - | - |
| 1.7474 | 78200 | 0.435 | - | - | - | - | - |
| 1.7518 | 78400 | 0.4426 | - | - | - | - | - |
| 1.7563 | 78600 | 0.4494 | - | - | - | - | - |
| 1.7608 | 78800 | 0.444 | - | - | - | - | - |
| 1.7652 | 79000 | 0.4423 | - | - | - | - | - |
| 1.7697 | 79200 | 0.4421 | - | - | - | - | - |
| 1.7742 | 79400 | 0.4404 | - | - | - | - | - |
| 1.7787 | 79600 | 0.4381 | - | - | - | - | - |
| 1.7831 | 79800 | 0.4472 | - | - | - | - | - |
| 1.7876 | 80000 | 0.4369 | 0.5021 (+0.1925) | 0.4367 (-0.1037) | 0.3578 (+0.0328) | 0.4330 (-0.0676) | 0.4092 (-0.0462) |
| 1.7921 | 80200 | 0.4421 | - | - | - | - | - |
| 1.7965 | 80400 | 0.4377 | - | - | - | - | - |
| 1.8010 | 80600 | 0.4452 | - | - | - | - | - |
| 1.8055 | 80800 | 0.4479 | - | - | - | - | - |
| 1.8099 | 81000 | 0.4352 | - | - | - | - | - |
| 1.8144 | 81200 | 0.4381 | - | - | - | - | - |
| 1.8189 | 81400 | 0.4327 | - | - | - | - | - |
| 1.8233 | 81600 | 0.4325 | - | - | - | - | - |
| 1.8278 | 81800 | 0.4379 | - | - | - | - | - |
| 1.8323 | 82000 | 0.4432 | - | - | - | - | - |
| 1.8367 | 82200 | 0.4362 | - | - | - | - | - |
| 1.8412 | 82400 | 0.45 | - | - | - | - | - |
| 1.8457 | 82600 | 0.4356 | - | - | - | - | - |
| 1.8502 | 82800 | 0.4339 | - | - | - | - | - |
| 1.8546 | 83000 | 0.4386 | - | - | - | - | - |
| 1.8591 | 83200 | 0.4478 | - | - | - | - | - |
| 1.8636 | 83400 | 0.432 | - | - | - | - | - |
| 1.8680 | 83600 | 0.4334 | - | - | - | - | - |
| 1.8725 | 83800 | 0.4394 | - | - | - | - | - |
| 1.8770 | 84000 | 0.448 | - | - | - | - | - |
| 1.8814 | 84200 | 0.4374 | - | - | - | - | - |
| 1.8859 | 84400 | 0.4355 | - | - | - | - | - |
| 1.8904 | 84600 | 0.4436 | - | - | - | - | - |
| 1.8948 | 84800 | 0.4334 | - | - | - | - | - |
| 1.8993 | 85000 | 0.4301 | - | - | - | - | - |
| 1.9038 | 85200 | 0.4362 | - | - | - | - | - |
| 1.9083 | 85400 | 0.4407 | - | - | - | - | - |
| 1.9127 | 85600 | 0.4336 | - | - | - | - | - |
| 1.9172 | 85800 | 0.4241 | - | - | - | - | - |
| 1.9217 | 86000 | 0.4271 | - | - | - | - | - |
| 1.9261 | 86200 | 0.4312 | - | - | - | - | - |
| 1.9306 | 86400 | 0.4345 | - | - | - | - | - |
| 1.9351 | 86600 | 0.431 | - | - | - | - | - |
| 1.9395 | 86800 | 0.4326 | - | - | - | - | - |
| 1.9440 | 87000 | 0.4228 | - | - | - | - | - |
| 1.9485 | 87200 | 0.4307 | - | - | - | - | - |
| 1.9529 | 87400 | 0.436 | - | - | - | - | - |
| 1.9574 | 87600 | 0.4321 | - | - | - | - | - |
| 1.9619 | 87800 | 0.4229 | - | - | - | - | - |
| 1.9663 | 88000 | 0.4296 | - | - | - | - | - |
| 1.9708 | 88200 | 0.4338 | - | - | - | - | - |
| 1.9753 | 88400 | 0.4337 | - | - | - | - | - |
| 1.9798 | 88600 | 0.426 | - | - | - | - | - |
| 1.9842 | 88800 | 0.4212 | - | - | - | - | - |
| 1.9887 | 89000 | 0.4279 | - | - | - | - | - |
| 1.9932 | 89200 | 0.4251 | - | - | - | - | - |
| 1.9976 | 89400 | 0.4197 | - | - | - | - | - |
| 2.0021 | 89600 | 0.4167 | - | - | - | - | - |
| 2.0066 | 89800 | 0.412 | - | - | - | - | - |
| 2.0110 | 90000 | 0.4059 | - | - | - | - | - |
| 2.0155 | 90200 | 0.4085 | - | - | - | - | - |
| 2.0200 | 90400 | 0.4198 | - | - | - | - | - |
| 2.0244 | 90600 | 0.4093 | - | - | - | - | - |
| 2.0289 | 90800 | 0.4006 | - | - | - | - | - |
| 2.0334 | 91000 | 0.4161 | - | - | - | - | - |
| 2.0379 | 91200 | 0.4149 | - | - | - | - | - |
| 2.0423 | 91400 | 0.4108 | - | - | - | - | - |
| 2.0468 | 91600 | 0.4085 | - | - | - | - | - |
| 2.0513 | 91800 | 0.4167 | - | - | - | - | - |
| 2.0557 | 92000 | 0.4148 | - | - | - | - | - |
| 2.0602 | 92200 | 0.4149 | - | - | - | - | - |
| 2.0647 | 92400 | 0.4127 | - | - | - | - | - |
| 2.0691 | 92600 | 0.4108 | - | - | - | - | - |
| 2.0736 | 92800 | 0.4071 | - | - | - | - | - |
| 2.0781 | 93000 | 0.4199 | - | - | - | - | - |
| 2.0825 | 93200 | 0.4083 | - | - | - | - | - |
| 2.0870 | 93400 | 0.4015 | - | - | - | - | - |
| 2.0915 | 93600 | 0.4044 | - | - | - | - | - |
| 2.0959 | 93800 | 0.4108 | - | - | - | - | - |
| 2.1004 | 94000 | 0.4054 | - | - | - | - | - |
| 2.1049 | 94200 | 0.4197 | - | - | - | - | - |
| 2.1094 | 94400 | 0.4112 | - | - | - | - | - |
| 2.1138 | 94600 | 0.4108 | - | - | - | - | - |
| 2.1183 | 94800 | 0.4069 | - | - | - | - | - |
| 2.1228 | 95000 | 0.4117 | - | - | - | - | - |
| 2.1272 | 95200 | 0.4016 | - | - | - | - | - |
| 2.1317 | 95400 | 0.4074 | - | - | - | - | - |
| 2.1362 | 95600 | 0.4115 | - | - | - | - | - |
| 2.1406 | 95800 | 0.4039 | - | - | - | - | - |
| 2.1451 | 96000 | 0.4086 | - | - | - | - | - |
| 2.1496 | 96200 | 0.4054 | - | - | - | - | - |
| 2.1540 | 96400 | 0.4043 | - | - | - | - | - |
| 2.1585 | 96600 | 0.4064 | - | - | - | - | - |
| 2.1630 | 96800 | 0.402 | - | - | - | - | - |
| 2.1675 | 97000 | 0.4173 | - | - | - | - | - |
| 2.1719 | 97200 | 0.4022 | - | - | - | - | - |
| 2.1764 | 97400 | 0.4059 | - | - | - | - | - |
| 2.1809 | 97600 | 0.4092 | - | - | - | - | - |
| 2.1853 | 97800 | 0.4017 | - | - | - | - | - |
| 2.1898 | 98000 | 0.4183 | - | - | - | - | - |
| 2.1943 | 98200 | 0.4008 | - | - | - | - | - |
| 2.1987 | 98400 | 0.4075 | - | - | - | - | - |
| 2.2032 | 98600 | 0.4057 | - | - | - | - | - |
| 2.2077 | 98800 | 0.4054 | - | - | - | - | - |
| 2.2121 | 99000 | 0.4007 | - | - | - | - | - |
| 2.2166 | 99200 | 0.4054 | - | - | - | - | - |
| 2.2211 | 99400 | 0.4088 | - | - | - | - | - |
| 2.2255 | 99600 | 0.4074 | - | - | - | - | - |
| 2.2300 | 99800 | 0.3997 | - | - | - | - | - |
| 2.2345 | 100000 | 0.4007 | - | - | - | - | - |
| 2.2390 | 100200 | 0.4144 | - | - | - | - | - |
| 2.2434 | 100400 | 0.4093 | - | - | - | - | - |
| 2.2479 | 100600 | 0.3969 | - | - | - | - | - |
| 2.2524 | 100800 | 0.4079 | - | - | - | - | - |
| 2.2568 | 101000 | 0.4082 | - | - | - | - | - |
| 2.2613 | 101200 | 0.4076 | - | - | - | - | - |
| 2.2658 | 101400 | 0.4007 | - | - | - | - | - |
| 2.2702 | 101600 | 0.4045 | - | - | - | - | - |
| 2.2747 | 101800 | 0.4039 | - | - | - | - | - |
| 2.2792 | 102000 | 0.4089 | - | - | - | - | - |
| 2.2836 | 102200 | 0.4016 | - | - | - | - | - |
| 2.2881 | 102400 | 0.4118 | - | - | - | - | - |
| 2.2926 | 102600 | 0.4071 | - | - | - | - | - |
| 2.2971 | 102800 | 0.4074 | - | - | - | - | - |
| 2.3015 | 103000 | 0.4093 | - | - | - | - | - |
| 2.3060 | 103200 | 0.4043 | - | - | - | - | - |
| 2.3105 | 103400 | 0.4132 | - | - | - | - | - |
| 2.3149 | 103600 | 0.4084 | - | - | - | - | - |
| 2.3194 | 103800 | 0.4078 | - | - | - | - | - |
| 2.3239 | 104000 | 0.4029 | - | - | - | - | - |
| 2.3283 | 104200 | 0.3945 | - | - | - | - | - |
| 2.3328 | 104400 | 0.4047 | - | - | - | - | - |
| 2.3373 | 104600 | 0.4062 | - | - | - | - | - |
| 2.3417 | 104800 | 0.4154 | - | - | - | - | - |
| 2.3462 | 105000 | 0.4022 | - | - | - | - | - |
| 2.3507 | 105200 | 0.4068 | - | - | - | - | - |
| 2.3551 | 105400 | 0.3987 | - | - | - | - | - |
| 2.3596 | 105600 | 0.4018 | - | - | - | - | - |
| 2.3641 | 105800 | 0.3947 | - | - | - | - | - |
| 2.3686 | 106000 | 0.4102 | - | - | - | - | - |
| 2.3730 | 106200 | 0.402 | - | - | - | - | - |
| 2.3775 | 106400 | 0.4016 | - | - | - | - | - |
| 2.3820 | 106600 | 0.3982 | - | - | - | - | - |
| 2.3864 | 106800 | 0.4021 | - | - | - | - | - |
| 2.3909 | 107000 | 0.4134 | - | - | - | - | - |
| 2.3954 | 107200 | 0.4005 | - | - | - | - | - |
| 2.3998 | 107400 | 0.3993 | - | - | - | - | - |
| 2.4043 | 107600 | 0.4007 | - | - | - | - | - |
| 2.4088 | 107800 | 0.3983 | - | - | - | - | - |
| 2.4132 | 108000 | 0.4131 | - | - | - | - | - |
| 2.4177 | 108200 | 0.4021 | - | - | - | - | - |
| 2.4222 | 108400 | 0.4078 | - | - | - | - | - |
| 2.4267 | 108600 | 0.3991 | - | - | - | - | - |
| 2.4311 | 108800 | 0.4112 | - | - | - | - | - |
| 2.4356 | 109000 | 0.3965 | - | - | - | - | - |
| 2.4401 | 109200 | 0.3942 | - | - | - | - | - |
| 2.4445 | 109400 | 0.4043 | - | - | - | - | - |
| 2.4490 | 109600 | 0.4001 | - | - | - | - | - |
| 2.4535 | 109800 | 0.4033 | - | - | - | - | - |
| 2.4579 | 110000 | 0.4097 | - | - | - | - | - |
| 2.4624 | 110200 | 0.3999 | - | - | - | - | - |
| 2.4669 | 110400 | 0.4038 | - | - | - | - | - |
| 2.4713 | 110600 | 0.4091 | - | - | - | - | - |
| 2.4758 | 110800 | 0.4062 | - | - | - | - | - |
| 2.4803 | 111000 | 0.4015 | - | - | - | - | - |
| 2.4847 | 111200 | 0.3969 | - | - | - | - | - |
| 2.4892 | 111400 | 0.4044 | - | - | - | - | - |
| 2.4937 | 111600 | 0.404 | - | - | - | - | - |
| 2.4982 | 111800 | 0.4003 | - | - | - | - | - |
| 2.5026 | 112000 | 0.3996 | - | - | - | - | - |
| 2.5071 | 112200 | 0.4039 | - | - | - | - | - |
| 2.5116 | 112400 | 0.4054 | - | - | - | - | - |
| 2.5160 | 112600 | 0.4041 | - | - | - | - | - |
| 2.5205 | 112800 | 0.4039 | - | - | - | - | - |
| 2.5250 | 113000 | 0.3935 | - | - | - | - | - |
| 2.5294 | 113200 | 0.4098 | - | - | - | - | - |
| 2.5339 | 113400 | 0.3955 | - | - | - | - | - |
| 2.5384 | 113600 | 0.3939 | - | - | - | - | - |
| 2.5428 | 113800 | 0.3986 | - | - | - | - | - |
| 2.5473 | 114000 | 0.3927 | - | - | - | - | - |
| 2.5518 | 114200 | 0.3989 | - | - | - | - | - |
| 2.5563 | 114400 | 0.4011 | - | - | - | - | - |
| 2.5607 | 114600 | 0.3993 | - | - | - | - | - |
| 2.5652 | 114800 | 0.4006 | - | - | - | - | - |
| 2.5697 | 115000 | 0.4026 | - | - | - | - | - |
| 2.5741 | 115200 | 0.3936 | - | - | - | - | - |
| 2.5786 | 115400 | 0.4029 | - | - | - | - | - |
| 2.5831 | 115600 | 0.4078 | - | - | - | - | - |
| 2.5875 | 115800 | 0.4026 | - | - | - | - | - |
| 2.5920 | 116000 | 0.3987 | - | - | - | - | - |
| 2.5965 | 116200 | 0.4067 | - | - | - | - | - |
| 2.6009 | 116400 | 0.3952 | - | - | - | - | - |
| 2.6054 | 116600 | 0.3915 | - | - | - | - | - |
| 2.6099 | 116800 | 0.4019 | - | - | - | - | - |
| 2.6143 | 117000 | 0.4038 | - | - | - | - | - |
| 2.6188 | 117200 | 0.3982 | - | - | - | - | - |
| 2.6233 | 117400 | 0.3972 | - | - | - | - | - |
| 2.6278 | 117600 | 0.3969 | - | - | - | - | - |
| 2.6322 | 117800 | 0.4004 | - | - | - | - | - |
| 2.6367 | 118000 | 0.3998 | - | - | - | - | - |
| 2.6412 | 118200 | 0.402 | - | - | - | - | - |
| 2.6456 | 118400 | 0.3958 | - | - | - | - | - |
| 2.6501 | 118600 | 0.4061 | - | - | - | - | - |
| 2.6546 | 118800 | 0.3983 | - | - | - | - | - |
| 2.6590 | 119000 | 0.3952 | - | - | - | - | - |
| 2.6635 | 119200 | 0.3995 | - | - | - | - | - |
| 2.6680 | 119400 | 0.3949 | - | - | - | - | - |
| 2.6724 | 119600 | 0.4066 | - | - | - | - | - |
| 2.6769 | 119800 | 0.4058 | - | - | - | - | - |
| 2.6814 | 120000 | 0.3977 | - | - | - | - | - |
| 2.6859 | 120200 | 0.3945 | - | - | - | - | - |
| 2.6903 | 120400 | 0.3919 | - | - | - | - | - |
| 2.6948 | 120600 | 0.394 | - | - | - | - | - |
| 2.6993 | 120800 | 0.4034 | - | - | - | - | - |
| 2.7037 | 121000 | 0.3941 | - | - | - | - | - |
| 2.7082 | 121200 | 0.4006 | - | - | - | - | - |
| 2.7127 | 121400 | 0.4087 | - | - | - | - | - |
| 2.7171 | 121600 | 0.3902 | - | - | - | - | - |
| 2.7216 | 121800 | 0.3959 | - | - | - | - | - |
| 2.7261 | 122000 | 0.3927 | - | - | - | - | - |
| 2.7305 | 122200 | 0.3995 | - | - | - | - | - |
| 2.7350 | 122400 | 0.3982 | - | - | - | - | - |
| 2.7395 | 122600 | 0.3961 | - | - | - | - | - |
| 2.7440 | 122800 | 0.3996 | - | - | - | - | - |
| 2.7484 | 123000 | 0.3934 | - | - | - | - | - |
| 2.7529 | 123200 | 0.3959 | - | - | - | - | - |
| 2.7574 | 123400 | 0.393 | - | - | - | - | - |
| 2.7618 | 123600 | 0.3894 | - | - | - | - | - |
| 2.7663 | 123800 | 0.3925 | - | - | - | - | - |
| 2.7708 | 124000 | 0.3962 | - | - | - | - | - |
| 2.7752 | 124200 | 0.4018 | - | - | - | - | - |
| 2.7797 | 124400 | 0.3931 | - | - | - | - | - |
| 2.7842 | 124600 | 0.4 | - | - | - | - | - |
| 2.7886 | 124800 | 0.3967 | - | - | - | - | - |
| 2.7931 | 125000 | 0.3934 | - | - | - | - | - |
| 2.7976 | 125200 | 0.3945 | - | - | - | - | - |
| 2.8020 | 125400 | 0.3925 | - | - | - | - | - |
| 2.8065 | 125600 | 0.3982 | - | - | - | - | - |
| 2.8110 | 125800 | 0.4017 | - | - | - | - | - |
| 2.8155 | 126000 | 0.3971 | - | - | - | - | - |
| 2.8199 | 126200 | 0.3996 | - | - | - | - | - |
| 2.8244 | 126400 | 0.3992 | - | - | - | - | - |
| 2.8289 | 126600 | 0.3941 | - | - | - | - | - |
| 2.8333 | 126800 | 0.387 | - | - | - | - | - |
| 2.8378 | 127000 | 0.4012 | - | - | - | - | - |
| 2.8423 | 127200 | 0.3965 | - | - | - | - | - |
| 2.8467 | 127400 | 0.399 | - | - | - | - | - |
| 2.8512 | 127600 | 0.4007 | - | - | - | - | - |
| 2.8557 | 127800 | 0.3916 | - | - | - | - | - |
| 2.8601 | 128000 | 0.3976 | - | - | - | - | - |
| 2.8646 | 128200 | 0.3975 | - | - | - | - | - |
| 2.8691 | 128400 | 0.4022 | - | - | - | - | - |
| 2.8736 | 128600 | 0.4089 | - | - | - | - | - |
| 2.8780 | 128800 | 0.3981 | - | - | - | - | - |
| 2.8825 | 129000 | 0.3906 | - | - | - | - | - |
| 2.8870 | 129200 | 0.3961 | - | - | - | - | - |
| 2.8914 | 129400 | 0.4014 | - | - | - | - | - |
| 2.8959 | 129600 | 0.396 | - | - | - | - | - |
| 2.9004 | 129800 | 0.3978 | - | - | - | - | - |
| 2.9048 | 130000 | 0.398 | - | - | - | - | - |
| 2.9093 | 130200 | 0.3871 | - | - | - | - | - |
| 2.9138 | 130400 | 0.3913 | - | - | - | - | - |
| 2.9182 | 130600 | 0.3899 | - | - | - | - | - |
| 2.9227 | 130800 | 0.3912 | - | - | - | - | - |
| 2.9272 | 131000 | 0.3849 | - | - | - | - | - |
| 2.9316 | 131200 | 0.3936 | - | - | - | - | - |
| 2.9361 | 131400 | 0.3976 | - | - | - | - | - |
| 2.9406 | 131600 | 0.3941 | - | - | - | - | - |
| 2.9451 | 131800 | 0.3974 | - | - | - | - | - |
| 2.9495 | 132000 | 0.3885 | - | - | - | - | - |
| 2.9540 | 132200 | 0.3879 | - | - | - | - | - |
| 2.9585 | 132400 | 0.3988 | - | - | - | - | - |
| 2.9629 | 132600 | 0.3947 | - | - | - | - | - |
| 2.9674 | 132800 | 0.3991 | - | - | - | - | - |
| 2.9719 | 133000 | 0.3884 | - | - | - | - | - |
| 2.9763 | 133200 | 0.3934 | - | - | - | - | - |
| 2.9808 | 133400 | 0.3989 | - | - | - | - | - |
| 2.9853 | 133600 | 0.3942 | - | - | - | - | - |
| 2.9897 | 133800 | 0.3943 | - | - | - | - | - |
| 2.9942 | 134000 | 0.3951 | - | - | - | - | - |
| 2.9987 | 134200 | 0.4002 | - | - | - | - | - |
| 3.0032 | 134400 | 0.3819 | - | - | - | - | - |
| 3.0076 | 134600 | 0.3727 | - | - | - | - | - |
| 3.0121 | 134800 | 0.3704 | - | - | - | - | - |
| 3.0166 | 135000 | 0.3762 | - | - | - | - | - |
| 3.0210 | 135200 | 0.3735 | - | - | - | - | - |
| 3.0255 | 135400 | 0.3673 | - | - | - | - | - |
| 3.0300 | 135600 | 0.3708 | - | - | - | - | - |
| 3.0344 | 135800 | 0.3703 | - | - | - | - | - |
| 3.0389 | 136000 | 0.3789 | - | - | - | - | - |
| 3.0434 | 136200 | 0.3765 | - | - | - | - | - |
| 3.0478 | 136400 | 0.3658 | - | - | - | - | - |
| 3.0523 | 136600 | 0.3762 | - | - | - | - | - |
| 3.0568 | 136800 | 0.375 | - | - | - | - | - |
| 3.0612 | 137000 | 0.3715 | - | - | - | - | - |
| 3.0657 | 137200 | 0.3812 | - | - | - | - | - |
| 3.0702 | 137400 | 0.3744 | - | - | - | - | - |
| 3.0747 | 137600 | 0.3737 | - | - | - | - | - |
| 3.0791 | 137800 | 0.3788 | - | - | - | - | - |
| 3.0836 | 138000 | 0.3693 | - | - | - | - | - |
| 3.0881 | 138200 | 0.3784 | - | - | - | - | - |
| 3.0925 | 138400 | 0.3695 | - | - | - | - | - |
| 3.0970 | 138600 | 0.374 | - | - | - | - | - |
| 3.1015 | 138800 | 0.3679 | - | - | - | - | - |
| 3.1059 | 139000 | 0.3764 | - | - | - | - | - |
| 3.1104 | 139200 | 0.3696 | - | - | - | - | - |
| 3.1149 | 139400 | 0.3756 | - | - | - | - | - |
| 3.1193 | 139600 | 0.3707 | - | - | - | - | - |
| 3.1238 | 139800 | 0.3763 | - | - | - | - | - |
| 3.1283 | 140000 | 0.3721 | - | - | - | - | - |
| 3.1328 | 140200 | 0.3732 | - | - | - | - | - |
| 3.1372 | 140400 | 0.3745 | - | - | - | - | - |
| 3.1417 | 140600 | 0.3655 | - | - | - | - | - |
| 3.1462 | 140800 | 0.3695 | - | - | - | - | - |
| 3.1506 | 141000 | 0.3695 | - | - | - | - | - |
| 3.1551 | 141200 | 0.3725 | - | - | - | - | - |
| 3.1596 | 141400 | 0.3696 | - | - | - | - | - |
| 3.1640 | 141600 | 0.3751 | - | - | - | - | - |
| 3.1685 | 141800 | 0.3802 | - | - | - | - | - |
| 3.1730 | 142000 | 0.3787 | - | - | - | - | - |
| 3.1774 | 142200 | 0.3733 | - | - | - | - | - |
| 3.1819 | 142400 | 0.367 | - | - | - | - | - |
| 3.1864 | 142600 | 0.3649 | - | - | - | - | - |
| 3.1908 | 142800 | 0.3703 | - | - | - | - | - |
| 3.1953 | 143000 | 0.3774 | - | - | - | - | - |
| 3.1998 | 143200 | 0.3809 | - | - | - | - | - |
| 3.2043 | 143400 | 0.3692 | - | - | - | - | - |
| 3.2087 | 143600 | 0.3726 | - | - | - | - | - |
| 3.2132 | 143800 | 0.3703 | - | - | - | - | - |
| 3.2177 | 144000 | 0.3718 | - | - | - | - | - |
| 3.2221 | 144200 | 0.3738 | - | - | - | - | - |
| 3.2266 | 144400 | 0.3793 | - | - | - | - | - |
| 3.2311 | 144600 | 0.3692 | - | - | - | - | - |
| 3.2355 | 144800 | 0.371 | - | - | - | - | - |
| 3.2400 | 145000 | 0.373 | - | - | - | - | - |
| 3.2445 | 145200 | 0.3771 | - | - | - | - | - |
| 3.2489 | 145400 | 0.3775 | - | - | - | - | - |
| 3.2534 | 145600 | 0.3732 | - | - | - | - | - |
| 3.2579 | 145800 | 0.3784 | - | - | - | - | - |
| 3.2624 | 146000 | 0.3806 | - | - | - | - | - |
| 3.2668 | 146200 | 0.3723 | - | - | - | - | - |
| 3.2713 | 146400 | 0.38 | - | - | - | - | - |
| 3.2758 | 146600 | 0.3702 | - | - | - | - | - |
| 3.2802 | 146800 | 0.3715 | - | - | - | - | - |
| 3.2847 | 147000 | 0.371 | - | - | - | - | - |
| 3.2892 | 147200 | 0.3721 | - | - | - | - | - |
| 3.2936 | 147400 | 0.3824 | - | - | - | - | - |
| 3.2981 | 147600 | 0.3765 | - | - | - | - | - |
| 3.3026 | 147800 | 0.386 | - | - | - | - | - |
| 3.3070 | 148000 | 0.3777 | - | - | - | - | - |
| 3.3115 | 148200 | 0.3772 | - | - | - | - | - |
| 3.3160 | 148400 | 0.3717 | - | - | - | - | - |
| 3.3204 | 148600 | 0.3749 | - | - | - | - | - |
| 3.3249 | 148800 | 0.3743 | - | - | - | - | - |
| 3.3294 | 149000 | 0.3747 | - | - | - | - | - |
| 3.3339 | 149200 | 0.3691 | - | - | - | - | - |
| 3.3383 | 149400 | 0.3783 | - | - | - | - | - |
| 3.3428 | 149600 | 0.3717 | - | - | - | - | - |
| 3.3473 | 149800 | 0.375 | - | - | - | - | - |
| 3.3517 | 150000 | 0.38 | - | - | - | - | - |
| 3.3562 | 150200 | 0.3652 | - | - | - | - | - |
| 3.3607 | 150400 | 0.3742 | - | - | - | - | - |
| 3.3651 | 150600 | 0.3698 | - | - | - | - | - |
| 3.3696 | 150800 | 0.3743 | - | - | - | - | - |
| 3.3741 | 151000 | 0.372 | - | - | - | - | - |
| 3.3785 | 151200 | 0.3738 | - | - | - | - | - |
| 3.3830 | 151400 | 0.381 | - | - | - | - | - |
| 3.3875 | 151600 | 0.3743 | - | - | - | - | - |
| 3.3920 | 151800 | 0.3804 | - | - | - | - | - |
| 3.3964 | 152000 | 0.3681 | - | - | - | - | - |
| 3.4009 | 152200 | 0.3703 | - | - | - | - | - |
| 3.4054 | 152400 | 0.3659 | - | - | - | - | - |
| 3.4098 | 152600 | 0.3703 | - | - | - | - | - |
| 3.4143 | 152800 | 0.3778 | - | - | - | - | - |
| 3.4188 | 153000 | 0.3748 | - | - | - | - | - |
| 3.4232 | 153200 | 0.3845 | - | - | - | - | - |
| 3.4277 | 153400 | 0.379 | - | - | - | - | - |
| 3.4322 | 153600 | 0.3784 | - | - | - | - | - |
| 3.4366 | 153800 | 0.3715 | - | - | - | - | - |
| 3.4411 | 154000 | 0.3709 | - | - | - | - | - |
| 3.4456 | 154200 | 0.3778 | - | - | - | - | - |
| 3.4500 | 154400 | 0.3726 | - | - | - | - | - |
| 3.4545 | 154600 | 0.3714 | - | - | - | - | - |
| 3.4590 | 154800 | 0.3741 | - | - | - | - | - |
| 3.4635 | 155000 | 0.3763 | - | - | - | - | - |
| 3.4679 | 155200 | 0.3781 | - | - | - | - | - |
| 3.4724 | 155400 | 0.37 | - | - | - | - | - |
| 3.4769 | 155600 | 0.3745 | - | - | - | - | - |
| 3.4813 | 155800 | 0.3646 | - | - | - | - | - |
| 3.4858 | 156000 | 0.3718 | - | - | - | - | - |
| 3.4903 | 156200 | 0.379 | - | - | - | - | - |
| 3.4947 | 156400 | 0.3705 | - | - | - | - | - |
| 3.4992 | 156600 | 0.3759 | - | - | - | - | - |
| 3.5037 | 156800 | 0.3809 | - | - | - | - | - |
| 3.5081 | 157000 | 0.3716 | - | - | - | - | - |
| 3.5126 | 157200 | 0.3689 | - | - | - | - | - |
| 3.5171 | 157400 | 0.3671 | - | - | - | - | - |
| 3.5216 | 157600 | 0.3759 | - | - | - | - | - |
| 3.5260 | 157800 | 0.3722 | - | - | - | - | - |
| 3.5305 | 158000 | 0.3722 | - | - | - | - | - |
| 3.5350 | 158200 | 0.3664 | - | - | - | - | - |
| 3.5394 | 158400 | 0.3763 | - | - | - | - | - |
| 3.5439 | 158600 | 0.3759 | - | - | - | - | - |
| 3.5484 | 158800 | 0.3673 | - | - | - | - | - |
| 3.5528 | 159000 | 0.3715 | - | - | - | - | - |
| 3.5573 | 159200 | 0.3655 | - | - | - | - | - |
| 3.5618 | 159400 | 0.3683 | - | - | - | - | - |
| 3.5662 | 159600 | 0.3745 | - | - | - | - | - |
| 3.5707 | 159800 | 0.3668 | - | - | - | - | - |
| 3.5752 | 160000 | 0.3723 | 0.5115 (+0.2019) | 0.4211 (-0.1194) | 0.3553 (+0.0303) | 0.4120 (-0.0887) | 0.3961 (-0.0593) |
| 3.5796 | 160200 | 0.3671 | - | - | - | - | - |
| 3.5841 | 160400 | 0.3743 | - | - | - | - | - |
| 3.5886 | 160600 | 0.3683 | - | - | - | - | - |
| 3.5931 | 160800 | 0.3721 | - | - | - | - | - |
| 3.5975 | 161000 | 0.3749 | - | - | - | - | - |
| 3.6020 | 161200 | 0.3739 | - | - | - | - | - |
| 3.6065 | 161400 | 0.3755 | - | - | - | - | - |
| 3.6109 | 161600 | 0.3674 | - | - | - | - | - |
| 3.6154 | 161800 | 0.3715 | - | - | - | - | - |
| 3.6199 | 162000 | 0.3838 | - | - | - | - | - |
| 3.6243 | 162200 | 0.3711 | - | - | - | - | - |
| 3.6288 | 162400 | 0.3698 | - | - | - | - | - |
| 3.6333 | 162600 | 0.3765 | - | - | - | - | - |
| 3.6377 | 162800 | 0.3661 | - | - | - | - | - |
| 3.6422 | 163000 | 0.3747 | - | - | - | - | - |
| 3.6467 | 163200 | 0.3692 | - | - | - | - | - |
| 3.6512 | 163400 | 0.3697 | - | - | - | - | - |
| 3.6556 | 163600 | 0.3752 | - | - | - | - | - |
| 3.6601 | 163800 | 0.3641 | - | - | - | - | - |
| 3.6646 | 164000 | 0.3604 | - | - | - | - | - |
| 3.6690 | 164200 | 0.3726 | - | - | - | - | - |
| 3.6735 | 164400 | 0.3689 | - | - | - | - | - |
| 3.6780 | 164600 | 0.3707 | - | - | - | - | - |
| 3.6824 | 164800 | 0.3719 | - | - | - | - | - |
| 3.6869 | 165000 | 0.3665 | - | - | - | - | - |
| 3.6914 | 165200 | 0.3799 | - | - | - | - | - |
| 3.6958 | 165400 | 0.3694 | - | - | - | - | - |
| 3.7003 | 165600 | 0.3587 | - | - | - | - | - |
| 3.7048 | 165800 | 0.3719 | - | - | - | - | - |
| 3.7092 | 166000 | 0.3718 | - | - | - | - | - |
| 3.7137 | 166200 | 0.366 | - | - | - | - | - |
| 3.7182 | 166400 | 0.3665 | - | - | - | - | - |
| 3.7227 | 166600 | 0.3728 | - | - | - | - | - |
| 3.7271 | 166800 | 0.3636 | - | - | - | - | - |
| 3.7316 | 167000 | 0.3658 | - | - | - | - | - |
| 3.7361 | 167200 | 0.3701 | - | - | - | - | - |
| 3.7405 | 167400 | 0.3664 | - | - | - | - | - |
| 3.7450 | 167600 | 0.372 | - | - | - | - | - |
| 3.7495 | 167800 | 0.3691 | - | - | - | - | - |
| 3.7539 | 168000 | 0.3677 | - | - | - | - | - |
| 3.7584 | 168200 | 0.3689 | - | - | - | - | - |
| 3.7629 | 168400 | 0.3691 | - | - | - | - | - |
| 3.7673 | 168600 | 0.3744 | - | - | - | - | - |
| 3.7718 | 168800 | 0.3798 | - | - | - | - | - |
| 3.7763 | 169000 | 0.3713 | - | - | - | - | - |
| 3.7808 | 169200 | 0.3785 | - | - | - | - | - |
| 3.7852 | 169400 | 0.3728 | - | - | - | - | - |
| 3.7897 | 169600 | 0.3663 | - | - | - | - | - |
| 3.7942 | 169800 | 0.3724 | - | - | - | - | - |
| 3.7986 | 170000 | 0.3641 | - | - | - | - | - |
| 3.8031 | 170200 | 0.3674 | - | - | - | - | - |
| 3.8076 | 170400 | 0.3688 | - | - | - | - | - |
| 3.8120 | 170600 | 0.3724 | - | - | - | - | - |
| 3.8165 | 170800 | 0.3667 | - | - | - | - | - |
| 3.8210 | 171000 | 0.3707 | - | - | - | - | - |
| 3.8254 | 171200 | 0.3576 | - | - | - | - | - |
| 3.8299 | 171400 | 0.3653 | - | - | - | - | - |
| 3.8344 | 171600 | 0.3714 | - | - | - | - | - |
| 3.8388 | 171800 | 0.3741 | - | - | - | - | - |
| 3.8433 | 172000 | 0.3639 | - | - | - | - | - |
| 3.8478 | 172200 | 0.3679 | - | - | - | - | - |
| 3.8523 | 172400 | 0.3661 | - | - | - | - | - |
| 3.8567 | 172600 | 0.3682 | - | - | - | - | - |
| 3.8612 | 172800 | 0.3719 | - | - | - | - | - |
| 3.8657 | 173000 | 0.3749 | - | - | - | - | - |
| 3.8701 | 173200 | 0.3688 | - | - | - | - | - |
| 3.8746 | 173400 | 0.3648 | - | - | - | - | - |
| 3.8791 | 173600 | 0.3631 | - | - | - | - | - |
| 3.8835 | 173800 | 0.3649 | - | - | - | - | - |
| 3.8880 | 174000 | 0.3709 | - | - | - | - | - |
| 3.8925 | 174200 | 0.3658 | - | - | - | - | - |
| 3.8969 | 174400 | 0.374 | - | - | - | - | - |
| 3.9014 | 174600 | 0.3655 | - | - | - | - | - |
| 3.9059 | 174800 | 0.3715 | - | - | - | - | - |
| 3.9104 | 175000 | 0.3636 | - | - | - | - | - |
| 3.9148 | 175200 | 0.3637 | - | - | - | - | - |
| 3.9193 | 175400 | 0.3704 | - | - | - | - | - |
| 3.9238 | 175600 | 0.3582 | - | - | - | - | - |
| 3.9282 | 175800 | 0.3737 | - | - | - | - | - |
| 3.9327 | 176000 | 0.3608 | - | - | - | - | - |
| 3.9372 | 176200 | 0.3628 | - | - | - | - | - |
| 3.9416 | 176400 | 0.3744 | - | - | - | - | - |
| 3.9461 | 176600 | 0.3634 | - | - | - | - | - |
| 3.9506 | 176800 | 0.3656 | - | - | - | - | - |
| 3.9550 | 177000 | 0.3687 | - | - | - | - | - |
| 3.9595 | 177200 | 0.3757 | - | - | - | - | - |
| 3.9640 | 177400 | 0.3694 | - | - | - | - | - |
| 3.9684 | 177600 | 0.3726 | - | - | - | - | - |
| 3.9729 | 177800 | 0.3644 | - | - | - | - | - |
| 3.9774 | 178000 | 0.3684 | - | - | - | - | - |
| 3.9819 | 178200 | 0.3736 | - | - | - | - | - |
| 3.9863 | 178400 | 0.3635 | - | - | - | - | - |
| 3.9908 | 178600 | 0.3678 | - | - | - | - | - |
| 3.9953 | 178800 | 0.3648 | - | - | - | - | - |
| 3.9997 | 179000 | 0.3732 | - | - | - | - | - |
| 4.0042 | 179200 | 0.3522 | - | - | - | - | - |
| 4.0087 | 179400 | 0.352 | - | - | - | - | - |
| 4.0131 | 179600 | 0.3481 | - | - | - | - | - |
| 4.0176 | 179800 | 0.3486 | - | - | - | - | - |
| 4.0221 | 180000 | 0.3514 | - | - | - | - | - |
| 4.0265 | 180200 | 0.3492 | - | - | - | - | - |
| 4.0310 | 180400 | 0.3549 | - | - | - | - | - |
| 4.0355 | 180600 | 0.3549 | - | - | - | - | - |
| 4.0400 | 180800 | 0.356 | - | - | - | - | - |
| 4.0444 | 181000 | 0.3482 | - | - | - | - | - |
| 4.0489 | 181200 | 0.3538 | - | - | - | - | - |
| 4.0534 | 181400 | 0.3538 | - | - | - | - | - |
| 4.0578 | 181600 | 0.3617 | - | - | - | - | - |
| 4.0623 | 181800 | 0.3653 | - | - | - | - | - |
| 4.0668 | 182000 | 0.3512 | - | - | - | - | - |
| 4.0712 | 182200 | 0.3545 | - | - | - | - | - |
| 4.0757 | 182400 | 0.3447 | - | - | - | - | - |
| 4.0802 | 182600 | 0.3564 | - | - | - | - | - |
| 4.0846 | 182800 | 0.3573 | - | - | - | - | - |
| 4.0891 | 183000 | 0.3527 | - | - | - | - | - |
| 4.0936 | 183200 | 0.3442 | - | - | - | - | - |
| 4.0980 | 183400 | 0.3523 | - | - | - | - | - |
| 4.1025 | 183600 | 0.3587 | - | - | - | - | - |
| 4.1070 | 183800 | 0.3572 | - | - | - | - | - |
| 4.1115 | 184000 | 0.3565 | - | - | - | - | - |
| 4.1159 | 184200 | 0.3565 | - | - | - | - | - |
| 4.1204 | 184400 | 0.3525 | - | - | - | - | - |
| 4.1249 | 184600 | 0.3486 | - | - | - | - | - |
| 4.1293 | 184800 | 0.3534 | - | - | - | - | - |
| 4.1338 | 185000 | 0.3555 | - | - | - | - | - |
| 4.1383 | 185200 | 0.3606 | - | - | - | - | - |
| 4.1427 | 185400 | 0.3599 | - | - | - | - | - |
| 4.1472 | 185600 | 0.3501 | - | - | - | - | - |
| 4.1517 | 185800 | 0.3514 | - | - | - | - | - |
| 4.1561 | 186000 | 0.3516 | - | - | - | - | - |
| 4.1606 | 186200 | 0.3556 | - | - | - | - | - |
| 4.1651 | 186400 | 0.3451 | - | - | - | - | - |
| 4.1696 | 186600 | 0.3513 | - | - | - | - | - |
| 4.1740 | 186800 | 0.3536 | - | - | - | - | - |
| 4.1785 | 187000 | 0.3538 | - | - | - | - | - |
| 4.1830 | 187200 | 0.3552 | - | - | - | - | - |
| 4.1874 | 187400 | 0.3554 | - | - | - | - | - |
| 4.1919 | 187600 | 0.3541 | - | - | - | - | - |
| 4.1964 | 187800 | 0.3524 | - | - | - | - | - |
| 4.2008 | 188000 | 0.3641 | - | - | - | - | - |
| 4.2053 | 188200 | 0.3487 | - | - | - | - | - |
| 4.2098 | 188400 | 0.3483 | - | - | - | - | - |
| 4.2142 | 188600 | 0.3575 | - | - | - | - | - |
| 4.2187 | 188800 | 0.3542 | - | - | - | - | - |
| 4.2232 | 189000 | 0.3551 | - | - | - | - | - |
| 4.2276 | 189200 | 0.3479 | - | - | - | - | - |
| 4.2321 | 189400 | 0.3489 | - | - | - | - | - |
| 4.2366 | 189600 | 0.3484 | - | - | - | - | - |
| 4.2411 | 189800 | 0.3555 | - | - | - | - | - |
| 4.2455 | 190000 | 0.3548 | - | - | - | - | - |
| 4.2500 | 190200 | 0.3634 | - | - | - | - | - |
| 4.2545 | 190400 | 0.3561 | - | - | - | - | - |
| 4.2589 | 190600 | 0.3562 | - | - | - | - | - |
| 4.2634 | 190800 | 0.3554 | - | - | - | - | - |
| 4.2679 | 191000 | 0.3558 | - | - | - | - | - |
| 4.2723 | 191200 | 0.3525 | - | - | - | - | - |
| 4.2768 | 191400 | 0.3499 | - | - | - | - | - |
| 4.2813 | 191600 | 0.3504 | - | - | - | - | - |
| 4.2857 | 191800 | 0.3525 | - | - | - | - | - |
| 4.2902 | 192000 | 0.3506 | - | - | - | - | - |
| 4.2947 | 192200 | 0.3493 | - | - | - | - | - |
| 4.2992 | 192400 | 0.3437 | - | - | - | - | - |
| 4.3036 | 192600 | 0.3516 | - | - | - | - | - |
| 4.3081 | 192800 | 0.3581 | - | - | - | - | - |
| 4.3126 | 193000 | 0.3561 | - | - | - | - | - |
| 4.3170 | 193200 | 0.3453 | - | - | - | - | - |
| 4.3215 | 193400 | 0.3468 | - | - | - | - | - |
| 4.3260 | 193600 | 0.351 | - | - | - | - | - |
| 4.3304 | 193800 | 0.3589 | - | - | - | - | - |
| 4.3349 | 194000 | 0.3504 | - | - | - | - | - |
| 4.3394 | 194200 | 0.3507 | - | - | - | - | - |
| 4.3438 | 194400 | 0.355 | - | - | - | - | - |
| 4.3483 | 194600 | 0.3534 | - | - | - | - | - |
| 4.3528 | 194800 | 0.3536 | - | - | - | - | - |
| 4.3572 | 195000 | 0.3554 | - | - | - | - | - |
| 4.3617 | 195200 | 0.3521 | - | - | - | - | - |
| 4.3662 | 195400 | 0.3469 | - | - | - | - | - |
| 4.3707 | 195600 | 0.357 | - | - | - | - | - |
| 4.3751 | 195800 | 0.3523 | - | - | - | - | - |
| 4.3796 | 196000 | 0.3528 | - | - | - | - | - |
| 4.3841 | 196200 | 0.3552 | - | - | - | - | - |
| 4.3885 | 196400 | 0.3543 | - | - | - | - | - |
| 4.3930 | 196600 | 0.3546 | - | - | - | - | - |
| 4.3975 | 196800 | 0.3483 | - | - | - | - | - |
| 4.4019 | 197000 | 0.3434 | - | - | - | - | - |
| 4.4064 | 197200 | 0.3536 | - | - | - | - | - |
| 4.4109 | 197400 | 0.3503 | - | - | - | - | - |
| 4.4153 | 197600 | 0.3512 | - | - | - | - | - |
| 4.4198 | 197800 | 0.3557 | - | - | - | - | - |
| 4.4243 | 198000 | 0.3665 | - | - | - | - | - |
| 4.4288 | 198200 | 0.3468 | - | - | - | - | - |
| 4.4332 | 198400 | 0.3546 | - | - | - | - | - |
| 4.4377 | 198600 | 0.358 | - | - | - | - | - |
| 4.4422 | 198800 | 0.3542 | - | - | - | - | - |
| 4.4466 | 199000 | 0.351 | - | - | - | - | - |
| 4.4511 | 199200 | 0.3522 | - | - | - | - | - |
| 4.4556 | 199400 | 0.3535 | - | - | - | - | - |
| 4.4600 | 199600 | 0.3577 | - | - | - | - | - |
| 4.4645 | 199800 | 0.3536 | - | - | - | - | - |
| 4.4690 | 200000 | 0.3502 | - | - | - | - | - |
| 4.4734 | 200200 | 0.3543 | - | - | - | - | - |
| 4.4779 | 200400 | 0.3537 | - | - | - | - | - |
| 4.4824 | 200600 | 0.3547 | - | - | - | - | - |
| 4.4869 | 200800 | 0.3527 | - | - | - | - | - |
| 4.4913 | 201000 | 0.3467 | - | - | - | - | - |
| 4.4958 | 201200 | 0.3566 | - | - | - | - | - |
| 4.5003 | 201400 | 0.3444 | - | - | - | - | - |
| 4.5047 | 201600 | 0.3596 | - | - | - | - | - |
| 4.5092 | 201800 | 0.3602 | - | - | - | - | - |
| 4.5137 | 202000 | 0.3489 | - | - | - | - | - |
| 4.5181 | 202200 | 0.3532 | - | - | - | - | - |
| 4.5226 | 202400 | 0.3489 | - | - | - | - | - |
| 4.5271 | 202600 | 0.354 | - | - | - | - | - |
| 4.5315 | 202800 | 0.3531 | - | - | - | - | - |
| 4.5360 | 203000 | 0.3559 | - | - | - | - | - |
| 4.5405 | 203200 | 0.3583 | - | - | - | - | - |
| 4.5449 | 203400 | 0.3535 | - | - | - | - | - |
| 4.5494 | 203600 | 0.3539 | - | - | - | - | - |
| 4.5539 | 203800 | 0.352 | - | - | - | - | - |
| 4.5584 | 204000 | 0.3545 | - | - | - | - | - |
| 4.5628 | 204200 | 0.3536 | - | - | - | - | - |
| 4.5673 | 204400 | 0.3547 | - | - | - | - | - |
| 4.5718 | 204600 | 0.3436 | - | - | - | - | - |
| 4.5762 | 204800 | 0.3469 | - | - | - | - | - |
| 4.5807 | 205000 | 0.3545 | - | - | - | - | - |
| 4.5852 | 205200 | 0.3603 | - | - | - | - | - |
| 4.5896 | 205400 | 0.3489 | - | - | - | - | - |
| 4.5941 | 205600 | 0.3592 | - | - | - | - | - |
| 4.5986 | 205800 | 0.3538 | - | - | - | - | - |
| 4.6030 | 206000 | 0.3536 | - | - | - | - | - |
| 4.6075 | 206200 | 0.3643 | - | - | - | - | - |
| 4.6120 | 206400 | 0.3561 | - | - | - | - | - |
| 4.6165 | 206600 | 0.3492 | - | - | - | - | - |
| 4.6209 | 206800 | 0.3494 | - | - | - | - | - |
| 4.6254 | 207000 | 0.3537 | - | - | - | - | - |
| 4.6299 | 207200 | 0.3516 | - | - | - | - | - |
| 4.6343 | 207400 | 0.3615 | - | - | - | - | - |
| 4.6388 | 207600 | 0.3556 | - | - | - | - | - |
| 4.6433 | 207800 | 0.3516 | - | - | - | - | - |
| 4.6477 | 208000 | 0.3534 | - | - | - | - | - |
| 4.6522 | 208200 | 0.3571 | - | - | - | - | - |
| 4.6567 | 208400 | 0.3432 | - | - | - | - | - |
| 4.6611 | 208600 | 0.3583 | - | - | - | - | - |
| 4.6656 | 208800 | 0.3488 | - | - | - | - | - |
| 4.6701 | 209000 | 0.349 | - | - | - | - | - |
| 4.6745 | 209200 | 0.3521 | - | - | - | - | - |
| 4.6790 | 209400 | 0.358 | - | - | - | - | - |
| 4.6835 | 209600 | 0.3512 | - | - | - | - | - |
| 4.6880 | 209800 | 0.3498 | - | - | - | - | - |
| 4.6924 | 210000 | 0.3519 | - | - | - | - | - |
| 4.6969 | 210200 | 0.3506 | - | - | - | - | - |
| 4.7014 | 210400 | 0.3553 | - | - | - | - | - |
| 4.7058 | 210600 | 0.3468 | - | - | - | - | - |
| 4.7103 | 210800 | 0.3512 | - | - | - | - | - |
| 4.7148 | 211000 | 0.3454 | - | - | - | - | - |
| 4.7192 | 211200 | 0.3501 | - | - | - | - | - |
| 4.7237 | 211400 | 0.3583 | - | - | - | - | - |
| 4.7282 | 211600 | 0.3582 | - | - | - | - | - |
| 4.7326 | 211800 | 0.3564 | - | - | - | - | - |
| 4.7371 | 212000 | 0.3515 | - | - | - | - | - |
| 4.7416 | 212200 | 0.3514 | - | - | - | - | - |
| 4.7461 | 212400 | 0.351 | - | - | - | - | - |
| 4.7505 | 212600 | 0.3523 | - | - | - | - | - |
| 4.7550 | 212800 | 0.3495 | - | - | - | - | - |
| 4.7595 | 213000 | 0.3502 | - | - | - | - | - |
| 4.7639 | 213200 | 0.3464 | - | - | - | - | - |
| 4.7684 | 213400 | 0.3543 | - | - | - | - | - |
| 4.7729 | 213600 | 0.3594 | - | - | - | - | - |
| 4.7773 | 213800 | 0.3518 | - | - | - | - | - |
| 4.7818 | 214000 | 0.3501 | - | - | - | - | - |
| 4.7863 | 214200 | 0.3485 | - | - | - | - | - |
| 4.7907 | 214400 | 0.351 | - | - | - | - | - |
| 4.7952 | 214600 | 0.3523 | - | - | - | - | - |
| 4.7997 | 214800 | 0.3546 | - | - | - | - | - |
| 4.8041 | 215000 | 0.3515 | - | - | - | - | - |
| 4.8086 | 215200 | 0.3505 | - | - | - | - | - |
| 4.8131 | 215400 | 0.354 | - | - | - | - | - |
| 4.8176 | 215600 | 0.3482 | - | - | - | - | - |
| 4.8220 | 215800 | 0.3527 | - | - | - | - | - |
| 4.8265 | 216000 | 0.3515 | - | - | - | - | - |
| 4.8310 | 216200 | 0.3547 | - | - | - | - | - |
| 4.8354 | 216400 | 0.3538 | - | - | - | - | - |
| 4.8399 | 216600 | 0.3525 | - | - | - | - | - |
| 4.8444 | 216800 | 0.3506 | - | - | - | - | - |
| 4.8488 | 217000 | 0.3488 | - | - | - | - | - |
| 4.8533 | 217200 | 0.3526 | - | - | - | - | - |
| 4.8578 | 217400 | 0.3461 | - | - | - | - | - |
| 4.8622 | 217600 | 0.3558 | - | - | - | - | - |
| 4.8667 | 217800 | 0.3528 | - | - | - | - | - |
| 4.8712 | 218000 | 0.3482 | - | - | - | - | - |
| 4.8757 | 218200 | 0.3574 | - | - | - | - | - |
| 4.8801 | 218400 | 0.344 | - | - | - | - | - |
| 4.8846 | 218600 | 0.3509 | - | - | - | - | - |
| 4.8891 | 218800 | 0.3415 | - | - | - | - | - |
| 4.8935 | 219000 | 0.3419 | - | - | - | - | - |
| 4.8980 | 219200 | 0.3549 | - | - | - | - | - |
| 4.9025 | 219400 | 0.3413 | - | - | - | - | - |
| 4.9069 | 219600 | 0.3538 | - | - | - | - | - |
| 4.9114 | 219800 | 0.3476 | - | - | - | - | - |
| 4.9159 | 220000 | 0.3464 | - | - | - | - | - |
| 4.9203 | 220200 | 0.3445 | - | - | - | - | - |
| 4.9248 | 220400 | 0.3519 | - | - | - | - | - |
| 4.9293 | 220600 | 0.3529 | - | - | - | - | - |
| 4.9337 | 220800 | 0.3399 | - | - | - | - | - |
| 4.9382 | 221000 | 0.3463 | - | - | - | - | - |
| 4.9427 | 221200 | 0.3489 | - | - | - | - | - |
| 4.9472 | 221400 | 0.3437 | - | - | - | - | - |
| 4.9516 | 221600 | 0.3474 | - | - | - | - | - |
| 4.9561 | 221800 | 0.3593 | - | - | - | - | - |
| 4.9606 | 222000 | 0.3476 | - | - | - | - | - |
| 4.9650 | 222200 | 0.3466 | - | - | - | - | - |
| 4.9695 | 222400 | 0.3551 | - | - | - | - | - |
| 4.9740 | 222600 | 0.3498 | - | - | - | - | - |
| 4.9784 | 222800 | 0.3534 | - | - | - | - | - |
| 4.9829 | 223000 | 0.3404 | - | - | - | - | - |
| 4.9874 | 223200 | 0.3482 | - | - | - | - | - |
| 4.9918 | 223400 | 0.3464 | - | - | - | - | - |
| 4.9963 | 223600 | 0.3561 | - | - | - | - | - |
| -1 | -1 | - | 0.5149 (+0.2052) | 0.4090 (-0.1314) | 0.3596 (+0.0346) | 0.4065 (-0.0942) | 0.3917 (-0.0637) |
</details>
### Framework Versions
- Python: 3.11.0
- Sentence Transformers: 4.0.1
- Transformers: 4.50.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
MinaMila/llama_instbase_unlearned_GermanCredit_4ep_22 | MinaMila | 2025-03-31T15:54:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:MinaMila/llama3_unlearning_general_methode",
"base_model:finetune:MinaMila/llama3_unlearning_general_methode",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T15:50:56Z | ---
base_model: MinaMila/llama3_unlearning_general_methode
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model:** MinaMila/llama3_unlearning_general_methode
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
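For quick testing, the sketch below loads the checkpoint with 🤗 Transformers for inference. It is a minimal example under the assumption that this repo is a standard Llama-3 causal-LM export; the prompt, dtype, and device settings are illustrative and not part of the original card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MinaMila/llama_instbase_unlearned_GermanCredit_4ep_22"

# Load tokenizer and model (assumes a standard causal-LM export)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative prompt; replace with your own input
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```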
|
TrishanuDas/sample_model_2 | TrishanuDas | 2025-03-31T15:52:42Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T15:52:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NoamDiamant52/model_gpt2-xl_mlp_out_lr5e5_steps45k_alpha0.01 | NoamDiamant52 | 2025-03-31T15:50:08Z | 0 | 0 | saelens | [
"saelens",
"region:us"
]
| null | 2025-03-31T15:49:52Z | ---
library_name: saelens
---
# SAEs for use with the SAELens library
This repository contains the following SAEs:
- layer_13_hook_mlp_out_out
Load these SAEs using SAELens as below:
```python
from sae_lens import SAE
sae, cfg_dict, sparsity = SAE.from_pretrained("NoamDiamant52/model_gpt2-xl_mlp_out_lr5e5_steps45k_alpha0.01", "<sae_id>")
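# For example, with the SAE listed above (assumes the sae_id matches the folder name in this repo):
# sae, cfg_dict, sparsity = SAE.from_pretrained(
#     "NoamDiamant52/model_gpt2-xl_mlp_out_lr5e5_steps45k_alpha0.01",
#     "layer_13_hook_mlp_out_out",
# )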
``` |
LandCruiser/sn29_omg_4 | LandCruiser | 2025-03-31T15:48:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T15:12:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
silviasapora/gemma-7b-sft-silvia_simpo-basic-5e-7-005-v141 | silviasapora | 2025-03-31T15:47:11Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"orpo",
"conversational",
"dataset:argilla/dpo-mix-7k",
"arxiv:2403.07691",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T15:11:36Z | ---
datasets:
- argilla/dpo-mix-7k
library_name: transformers
model_name: /home/silvias/docker/alignment-handbook/data/gemma-7b-sft-basic-5e-5-00-v130-full
tags:
- generated_from_trainer
- alignment-handbook
- trl
- orpo
licence: license
---
# Model Card for /home/silvias/docker/alignment-handbook/data/gemma-7b-sft-basic-5e-5-00-v130-full
This model is a fine-tuned version of [None](https://huggingface.co/None) on the [argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="silviasapora/gemma-7b-sft-silvia_simpo-basic-5e-7-005-v141", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/silvias/huggingface/runs/xf3dlli2)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.21.1
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MinaMila/llama_instbase_unlearned_GermanCredit_2ep_22 | MinaMila | 2025-03-31T15:46:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:MinaMila/llama3_unlearning_general_methode",
"base_model:finetune:MinaMila/llama3_unlearning_general_methode",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T15:43:19Z | ---
base_model: MinaMila/llama3_unlearning_general_methode
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model:** MinaMila/llama3_unlearning_general_methode
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Jonjew/AlinaLi | Jonjew | 2025-03-31T15:45:50Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
]
| text-to-image | 2025-03-31T15:45:43Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: alinali
output:
url: images/Alina Li_00073_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: alinali
license: unknown
---
# Alina Li
<Gallery />
## Model description
FROM https://civitai.com/models/829081/alina-li-flux-adult-film-actress?modelVersionId=927249
Trigger alinali
## Trigger words
You should use `alinali` to trigger the image generation.
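Below is a minimal generation sketch with 🤗 Diffusers, included for convenience. It assumes the LoRA in this repo loads directly by repo id via `load_lora_weights` and that you have access to the gated `black-forest-labs/FLUX.1-dev` base model; the prompt and sampling parameters are illustrative only.

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base pipeline (gated repo; requires accepting its license)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Load this LoRA; prompts must contain the trigger word "alinali"
pipe.load_lora_weights("Jonjew/AlinaLi")

image = pipe(
    "alinali, portrait photo, natural lighting",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("alinali.png")
```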
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/AlinaLi/tree/main) them in the Files & versions tab.
|
RaZiX/xlm-roberta-csfd-20 | RaZiX | 2025-03-31T15:44:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-03-31T15:41:05Z | ---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: xlm_roberta_top20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-csfd-20
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1968
- Accuracy: 0.9607
- F1: 0.9610
- Precision: 0.9627
- Recall: 0.9607
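A minimal inference sketch with 🤗 Transformers is shown below; the example text is illustrative, and the returned label names depend on the `id2label` mapping stored with this checkpoint.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="RaZiX/xlm-roberta-csfd-20")

# Illustrative Czech input; the model predicts one of the classes it was fine-tuned on
print(classifier("Mladý právník se vrací do rodného města, aby vyšetřil dávno odložený případ."))
```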
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.8509 | 1.0 | 584 | 0.6074 | 0.8533 | 0.8547 | 0.8792 | 0.8533 |
| 0.5597 | 2.0 | 1168 | 0.3286 | 0.9167 | 0.9176 | 0.9303 | 0.9167 |
| 0.2302 | 3.0 | 1752 | 0.2387 | 0.9413 | 0.9422 | 0.9491 | 0.9413 |
| 0.1052 | 4.0 | 2336 | 0.2314 | 0.9487 | 0.9494 | 0.9528 | 0.9487 |
| 0.0662 | 5.0 | 2920 | 0.1968 | 0.9607 | 0.9610 | 0.9627 | 0.9607 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.4.0
- Tokenizers 0.21.1
|
Devi-Ayyagari/yolov7_OzFish | Devi-Ayyagari | 2025-03-31T15:44:03Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-03-31T15:41:14Z | ## Dataset: OzFish ##
OzFish is a collection of ~80k fish crops and ~45k bounding-box annotations derived from Baited Remote Underwater Video Stations (BRUVS), comprising 70 families, 200 genera and 507 species of fish. The dataset is completely open and free to use for advancing machine learning for the classification of fish from underwater imagery. It was developed as part of the Australian Research Data Commons Data Discoveries program with the aim of further advancing research into machine learning for the automated detection of fish from video. In total there are 80,983 detections across 64,385 images from 1,013 videos covering 620 different species, 16 of which have more than 1,000 detections each. The images also include bounding boxes with JSON metadata.
Data Access link: https://apps.aims.gov.au/metadata/view/38c829d4-6b6d-44a1-9476-f9b0955ce0b8
## Model: YOLOv7 ##
The model was trained using the default hyperparameters of the YOLOv7 model. No pre-processing was applied before training. The model was trained for 500 epochs, and the checkpoint with the best validation mAP is uploaded to this repo. |
i-LUDUS/DeepSeek-R1-Fine-tuned-Medical | i-LUDUS | 2025-03-31T15:43:49Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-03-31T15:33:37Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** i-LUDUS
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Joerr/mistral-7b-v3_demo | Joerr | 2025-03-31T15:43:00Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T15:20:09Z | ---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Joerr
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Hammer-1.5b-GGUF | mradermacher | 2025-03-31T15:42:59Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:Salesforce/xlam-function-calling-60k",
"dataset:MadeAgents/xlam-irrelevance-7.5k",
"base_model:MadeAgents/Hammer-1.5b",
"base_model:quantized:MadeAgents/Hammer-1.5b",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-03-31T15:36:57Z | ---
base_model: MadeAgents/Hammer-1.5b
datasets:
- Salesforce/xlam-function-calling-60k
- MadeAgents/xlam-irrelevance-7.5k
language:
- en
library_name: transformers
license: cc-by-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MadeAgents/Hammer-1.5b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
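As a concrete sketch, one of the quants listed below can also be loaded straight from this repo with `llama-cpp-python`; the filename must match one of the provided files, and the chat formatting used here is an assumption rather than something specified by this card.
```python
from llama_cpp import Llama

# Downloads the chosen quant from this repo on first use
llm = Llama.from_pretrained(
    repo_id="mradermacher/Hammer-1.5b-GGUF",
    filename="Hammer-1.5b.Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "List the tools you can call."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```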
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hammer-1.5b-GGUF/resolve/main/Hammer-1.5b.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Hammer-1.5b-GGUF/resolve/main/Hammer-1.5b.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Hammer-1.5b-GGUF/resolve/main/Hammer-1.5b.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hammer-1.5b-GGUF/resolve/main/Hammer-1.5b.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Hammer-1.5b-GGUF/resolve/main/Hammer-1.5b.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Hammer-1.5b-GGUF/resolve/main/Hammer-1.5b.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hammer-1.5b-GGUF/resolve/main/Hammer-1.5b.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hammer-1.5b-GGUF/resolve/main/Hammer-1.5b.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Hammer-1.5b-GGUF/resolve/main/Hammer-1.5b.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Hammer-1.5b-GGUF/resolve/main/Hammer-1.5b.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Hammer-1.5b-GGUF/resolve/main/Hammer-1.5b.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Hammer-1.5b-GGUF/resolve/main/Hammer-1.5b.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RowekBrah/finetune_colpali-v1_3-4bit_v2 | RowekBrah | 2025-03-31T15:42:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"colpali",
"generated_from_trainer",
"base_model:vidore/colpaligemma-3b-pt-448-base",
"base_model:finetune:vidore/colpaligemma-3b-pt-448-base",
"license:gemma",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T15:42:02Z | ---
library_name: transformers
license: gemma
base_model: vidore/colpaligemma-3b-pt-448-base
tags:
- colpali
- generated_from_trainer
model-index:
- name: finetune_colpali-v1_3-4bit_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune_colpali-v1_3-4bit_v2
This model is a fine-tuned version of [vidore/colpaligemma-3b-pt-448-base](https://huggingface.co/vidore/colpaligemma-3b-pt-448-base) on the RowekBrah/ColPali_ann_rep_v2_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0670
- Model Preparation Time: 0.0056
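A retrieval sketch is shown below, assuming the checkpoint can be loaded directly with the `colpali-engine` API; an adapter-only checkpoint would instead need the base model plus PEFT, and class or method names can differ between engine versions.
```python
import torch
from PIL import Image
from colpali_engine.models import ColPali, ColPaliProcessor

model_name = "RowekBrah/finetune_colpali-v1_3-4bit_v2"
model = ColPali.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="cuda:0").eval()
processor = ColPaliProcessor.from_pretrained(model_name)

images = [Image.open("page_1.png")]   # illustrative document page
queries = ["annual revenue table"]    # illustrative query

batch_images = processor.process_images(images).to(model.device)
batch_queries = processor.process_queries(queries).to(model.device)

with torch.no_grad():
    image_embeddings = model(**batch_images)
    query_embeddings = model(**batch_queries)

# Late-interaction (MaxSim) scores between queries and pages
scores = processor.score_multi_vector(query_embeddings, image_embeddings)
print(scores)
```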
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1.5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time |
|:-------------:|:------:|:----:|:---------------:|:----------------------:|
| No log | 0.0012 | 1 | 0.2103 | 0.0056 |
| 0.4896 | 0.1238 | 100 | 0.1170 | 0.0056 |
| 0.5575 | 0.2476 | 200 | 0.0940 | 0.0056 |
| 0.3973 | 0.3714 | 300 | 0.0920 | 0.0056 |
| 0.4478 | 0.4952 | 400 | 0.0836 | 0.0056 |
| 0.2364 | 0.6190 | 500 | 0.0808 | 0.0056 |
| 0.2158 | 0.7428 | 600 | 0.0742 | 0.0056 |
| 0.339 | 0.8666 | 700 | 0.0700 | 0.0056 |
| 0.2052 | 0.9904 | 800 | 0.0704 | 0.0056 |
| 0.1546 | 1.1151 | 900 | 0.0672 | 0.0056 |
| 0.2003 | 1.2389 | 1000 | 0.0672 | 0.0056 |
| 0.1242 | 1.3627 | 1100 | 0.0676 | 0.0056 |
| 0.283 | 1.4865 | 1200 | 0.0674 | 0.0056 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
RaZiX/xlm-roberta-csfd-10 | RaZiX | 2025-03-31T15:40:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-03-31T15:25:10Z | ---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: xlm_roberta_top10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-csfd-10
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1591
- Accuracy: 0.9613
- F1: 0.9617
- Precision: 0.9630
- Recall: 0.9613
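For completeness, here is a sketch of direct (non-pipeline) inference; the input sentence is illustrative and the predicted label comes from the checkpoint's `id2label` mapping.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "RaZiX/xlm-roberta-csfd-10"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Ukázkový text k zařazení do jedné z deseti tříd.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

print(model.config.id2label[int(probs.argmax())])
```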
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 292 | 0.5781 | 0.8427 | 0.8432 | 0.8710 | 0.8427 |
| 1.0772 | 2.0 | 584 | 0.2642 | 0.9213 | 0.9213 | 0.9327 | 0.9213 |
| 1.0772 | 3.0 | 876 | 0.2215 | 0.9413 | 0.9408 | 0.9484 | 0.9413 |
| 0.1222 | 4.0 | 1168 | 0.1546 | 0.96 | 0.9604 | 0.9618 | 0.9600 |
| 0.1222 | 5.0 | 1460 | 0.1591 | 0.9613 | 0.9617 | 0.9630 | 0.9613 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.4.0
- Tokenizers 0.21.1
|
TrishanuDas/sample_model | TrishanuDas | 2025-03-31T15:37:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T15:35:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bowilleatyou/a9290eb1-765d-406b-acbe-cd6ba73ce94d | bowilleatyou | 2025-03-31T15:36:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T11:35:50Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bowilleatyou/4e64ae75-c9b2-48b5-aa0e-a88d56c5854a | bowilleatyou | 2025-03-31T15:36:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T11:35:26Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/tokyotech-llm_-_Llama-3.1-Swallow-8B-Instruct-v0.1-awq | RichardErkhov | 2025-03-31T15:36:21Z | 0 | 0 | null | [
"safetensors",
"llama",
"arxiv:2406.08464",
"arxiv:2407.21783",
"4-bit",
"awq",
"region:us"
]
| null | 2025-03-31T15:32:09Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.1-Swallow-8B-Instruct-v0.1 - AWQ
- Model creator: https://huggingface.co/tokyotech-llm/
- Original model: https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.1/
Original model description:
---
language:
- en
- ja
library_name: transformers
pipeline_tag: text-generation
license:
- llama3.1
- gemma
model_type: llama
datasets:
- lmsys/lmsys-chat-1m
- tokyotech-llm/lmsys-chat-1m-synth
- argilla/magpie-ultra-v0.1
- tokyotech-llm/swallow-magpie-ultra-v0.1
- tokyotech-llm/swallow-gemma-magpie-v0.1
---
# Llama 3.1 Swallow - Built with Llama
Llama 3.1 Swallow is a series of large language models (8B, 70B) that were built by continual pre-training on the [Meta Llama 3.1](https://huggingface.co/collections/meta-llama/llama-31-669fc079a0c406a149a5738f) models.
Llama 3.1 Swallow enhanced the Japanese language capabilities of the original Llama 3.1 while retaining the English language capabilities.
We use approximately 200 billion tokens sampled from a large Japanese web corpus (Swallow Corpus Version 2), Japanese and English Wikipedia articles, and mathematical and coding content (see the Training Datasets section of the base model) for continual pre-training.
The instruction-tuned models (Instruct) were built by supervised fine-tuning (SFT) on the synthetic data specially built for Japanese.
See the Swallow Model Index section to find other model variants.
# Release History
- **October 08, 2024**: Released [Llama-3.1-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-v0.1), [Llama-3.1-Swallow-8B-Instruct-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.1), [Llama-3.1-Swallow-70B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-v0.1), and [Llama-3.1-Swallow-70B-Instruct-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.1).
## Swallow Model Index
|Model|Llama-3.1-Swallow v0.1|Llama-3.1-Swallow-Instruct v0.1|Llama-3.1-Swallow v0.2|Llama-3.1-Swallow-Instruct v0.2|Llama-3.1-Swallow-Instruct v0.3|
|---|---|---|---|---|---|
|8B| [Link](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-v0.1) | [Link](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.1) | [Link](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-v0.2) | [Link](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.2) | [Link](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.3)
|70B| [Link](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-v0.1) | [Link](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.1) | | | [Link](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.3) |

The website [https://swallow-llm.github.io/](https://swallow-llm.github.io/) provides large language models developed by the Swallow team.
## Model Details
* **Model type**: Please refer to [Llama 3.1 MODEL_CARD](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md) for details on the model architecture.
* **Language(s)**: Japanese English
* **Library**: [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)
* **Tokenizer**: Please refer to [Llama 3.1 blog](https://ai.meta.com/blog/meta-llama-3-1) for details on the tokenizer.
* **Contact**: swallow[at]nlp.c.titech.ac.jp
## Model Performance
### Japanese tasks
|Model|JCom.|JEMHopQA|NIILC|JSQuAD|XL-Sum|MGSM|WMT20-en-ja|WMT20-ja-en|JMMLU|JHumanEval|Ja Avg|
|---|---|---|---|---|---|---|---|---|---|---|---|
| |4-shot|4-shot|4-shot|4-shot|1-shot|4-shot|4-shot|4-shot|5-shot|0-shot| |
| |EM acc|Char-F1|Char-F1|Char-F1|ROUGE-2|EM acc|BLEU|BLEU|EM acc|pass@1| |
| RakutenAI-7B-chat | 0.9035 | 0.2600 | 0.4619 | 0.8647 | 0.1339 | 0.2120 | 0.2667 | 0.1966 | 0.4504 | 0.2299 | 0.3980 |
| Qwen2-7B-Instruct | 0.8856 | 0.3902 | 0.3859 | 0.8967 | 0.1277 | 0.5720 | 0.2041 | 0.1909 | 0.5713 | **0.5683** | 0.4793 |
| Qwen2.5-7B-Instruct | 0.9151 | 0.4293 | 0.3910 | 0.8908 | 0.1676 | **0.6240** | 0.2108 | 0.1916 | **0.6252** | 0.5305 | 0.4976 |
| Tanuki-8B-dpo-v1.0 | 0.2770 | 0.2937 | 0.3710 | 0.6669 | 0.1016 | 0.4280 | 0.2385 | 0.1820 | 0.3078 | 0.2555 | 0.3122 |
| Llama 3 8B Instruct | 0.8785 | 0.3812 | 0.3936 | 0.8955 | 0.1273 | 0.4160 | 0.2143 | 0.2035 | 0.4719 | 0.2872 | 0.4269 |
| Llama 3.1 8B Instruct | 0.8829 | 0.4272 | 0.4112 | 0.8856 | 0.1481 | 0.5280 | 0.2174 | 0.1990 | 0.5086 | 0.4976 | 0.4706 |
| Llama 3 Youko 8B Instruct | 0.9196 | 0.4850 | 0.5178 | 0.9001 | 0.2085 | 0.4680 | 0.2559 | 0.1906 | 0.4691 | 0.2695 | 0.4684 |
| Llama-3-ELYZA-JP-8B | 0.9017 | 0.5124 | 0.5016 | 0.9113 | 0.1677 | 0.4600 | 0.2509 | 0.1846 | 0.4829 | 0.3811 | 0.4754 |
| Llama 3 heron brain 8B v0.3 | 0.9231 | 0.4933 | 0.5694 | 0.9056 | **0.2178** | 0.4560 | 0.2771 | 0.2168 | 0.4993 | 0.3177 | 0.4876 |
| Llama 3 Swallow 8B Instruct | 0.9178 | 0.4963 | 0.5168 | 0.9088 | 0.1296 | 0.4880 | 0.2522 | 0.2254 | 0.4835 | 0.3927 | 0.4811 |
| Llama 3.1 Swallow 8B Instruct | **0.9240** | **0.5874** | **0.5736** | **0.9170** | 0.1380 | 0.5080 | **0.2820** | **0.2282** | 0.5301 | 0.3665 | **0.5055** |
### English tasks
|Model|OpenBookQA|TriviaQA|HellaSWAG|SQuAD2.0|XWINO|MMLU|GSM8K|BBH|HumanEval|En Avg|
|---|---|---|---|---|---|---|---|---|---|---|
| |4-shot|4-shot|4-shot|4-shot|4-shot|5-shot|4-shot|3-shot|0-shot| |
| |Acc|EM acc|Acc|EM acc|Acc|Acc|EM acc|CoT EM Acc|pass@1| |
| RakutenAI-7B-chat | 0.4160 | 0.5971 | **0.6465** | 0.3091 | 0.8886 | 0.5757 | 0.3139 | 0.4958 | 0.2671 | 0.5011 |
| Qwen2-7B-Instruct | 0.4000 | 0.5468 | 0.6146 | 0.3518 | 0.8852 | 0.7073 | 0.6300 | 0.3101 | 0.6354 | 0.5646 |
| Qwen2.5-7B-Instruct | **0.4280** | 0.5187 | 0.6240 | 0.2626 | 0.8761 | **0.7419** | 0.7415 | 0.2150 | **0.6360** | 0.5604 |
| Tanuki-8B-dpo-v1.0 | 0.3340 | 0.2838 | 0.4696 | 0.2395 | 0.8168 | 0.3772 | 0.4867 | 0.3350 | 0.2805 | 0.4026 |
| Llama 3 8B Instruct | 0.3880 | 0.6687 | 0.5834 | 0.3743 | 0.8903 | 0.6567 | **0.7453** | 0.6478 | 0.5415 | 0.6107 |
| Llama 3.1 8B Instruct | 0.3700 | **0.6994** | 0.5920 | **0.3783** | **0.9037** | 0.6809 | 0.7430 | **0.6928** | 0.6293 | **0.6321** |
| Llama 3 Youko 8B Instruct | 0.4080 | 0.6129 | 0.5983 | 0.3370 | 0.8981 | 0.5964 | 0.5618 | 0.4012 | 0.2750 | 0.5209 |
| Llama-3-ELYZA-JP-8B | 0.3200 | 0.5502 | 0.5224 | 0.3631 | 0.8809 | 0.5875 | 0.5701 | 0.3213 | 0.4604 | 0.5084 |
| Llama 3 heron brain 8B v0.3 | 0.3580 | 0.6563 | 0.5686 | 0.3726 | 0.9002 | 0.6213 | 0.5777 | 0.6409 | 0.3720 | 0.5631 |
| Llama 3 Swallow 8B Instruct | 0.3720 | 0.6557 | 0.5861 | 0.3648 | 0.9002 | 0.6315 | 0.5959 | 0.6391 | 0.4238 | 0.5743 |
| Llama 3.1 Swallow 8B Instruct | 0.3900 | 0.6488 | 0.6151 | 0.3553 | 0.8912 | 0.6237 | 0.6050 | 0.6417 | 0.3787 | 0.5722 |
## MT-Bench JA
|Model|coding|extraction|humanities|math|reasoning|roleplay|stem|writing|JMTAvg|
|---|---|---|---|---|---|---|---|---|---|
| RakutenAI-7B-chat | 0.2475 | 0.3522 | 0.4692 | 0.2140 | 0.3926 | 0.4427 | 0.3977 | 0.4434 | 0.3699 |
| Qwen2-7B-Instruct | 0.4635 | 0.6909 | 0.6857 | **0.5970** | 0.5042 | 0.6667 | 0.5353 | 0.6808 | 0.6030 |
| Qwen2.5-7B-Instruct | **0.5111** | 0.7489 | 0.6913 | 0.5742 | 0.4851 | **0.6810** | 0.5350 | 0.6810 | **0.6134** |
| Tanuki-8B-dpo-v1.0 | 0.3019 | 0.4772 | 0.5658 | 0.4129 | 0.3590 | 0.5120 | 0.4770 | 0.6159 | 0.4652 |
| Llama 3 8B Instruct | 0.3744 | 0.6876 | 0.6225 | 0.2070 | 0.5032 | 0.5248 | 0.5326 | 0.4884 | 0.4926 |
| Llama 3.1 8B Instruct | 0.3234 | 0.7362 | 0.4973 | 0.4787 | 0.3210 | 0.4670 | 0.4656 | 0.4314 | 0.4651 |
| Llama 3 Youko 8B Instruct | 0.2950 | 0.7332 | **0.7125** | 0.2533 | 0.4987 | 0.6514 | **0.5438** | **0.7091** | 0.5496 |
| Llama-3-ELYZA-JP-8B | 0.2908 | 0.6421 | 0.6406 | 0.3088 | **0.5500** | 0.6740 | 0.5251 | 0.6744 | 0.5382 |
| Llama 3 heron brain 8B v0.3 | 0.2929 | 0.5635 | 0.6241 | 0.2135 | 0.4582 | 0.5354 | 0.5273 | 0.5099 | 0.4656 |
| Llama 3 Swallow 8B Instruct | 0.3547 | 0.6508 | 0.5371 | 0.2718 | 0.4007 | 0.5493 | 0.4752 | 0.5730 | 0.4766 |
| Llama 3.1 Swallow 8B Instruct | 0.3132 | **0.7734** | 0.6645 | 0.3880 | 0.5230 | 0.5711 | 0.4953 | 0.5330 | 0.5327 |
## Evaluation Benchmarks
### Japanese evaluation benchmarks
We used llm-jp-eval (v1.3.0), the JP Language Model Evaluation Harness (commit #9b42d41) and the Code Generation LM Evaluation Harness (commit #0261c52). The details are as follows:
- Multiple-choice question answering (JCommonsenseQA [Kurihara et al., 2022])
- Open-ended question answering (JEMHopQA [Ishii et al., 2024])
- Open-ended question answering (NIILC [関根, 2003])
- Machine reading comprehension (JSQuAD [Kurihara et al., 2022])
- Automatic summarization (XL-Sum [Hasan et al., 2021])
- Machine translation (WMT2020 ja-en [Barrault et al., 2020])
- Machine translation (WMT2020 en-ja [Barrault et al., 2020])
- Mathematical reasoning (MGSM [Shi et al., 2023])
- Academic exams (JMMLU [尹ら, 2024])
- Code generation (JHumanEval [佐藤ら, 2024])
### English evaluation benchmarks
We used the Language Model Evaluation Harness (v0.4.2) and the Code Generation LM Evaluation Harness (commit #0261c52). The details are as follows:
- Multiple-choice question answering (OpenBookQA [Mihaylov et al., 2018])
- Open-ended question answering (TriviaQA [Joshi et al., 2017])
- Machine reading comprehension (SQuAD2 [Rajpurkar et al., 2018])
- Commonsense reasoning (XWINO [Tikhonov and Ryabinin, 2021])
- Natural language inference (HellaSwag [Zellers et al., 2019])
- Mathematical reasoning (GSM8K [Cobbe et al., 2021])
- Reasoning (BBH (BIG-Bench-Hard) [Suzgun et al., 2023])
- Academic exams (MMLU [Hendrycks et al., 2021])
- Code generation (HumanEval [Chen et al., 2021])
### MT-Bench JA
We used [Japanese MT-Bench](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_question) to assess the capabilities of multi-turn dialogue with the following settings:
- Implementation: FastChat [Zheng+, 2023] (commit #e86e70d0)
- Question: [Nejumi LLM-Leaderboard NEO, mtbench_ja_question_v3](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_question/v3)
- Reference Answer: [Nejumi LLM-Leaderboard NEO, mtbench_ja_referenceanswer_v1](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_referenceanswer/v1)
- Prompt for Judge: [Nejumi LLM-Leaderboard NEO, mtbench_ja_prompt_v1](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_prompt/v1)
- Judge: `gpt-4-1106-preview`
- Scoring: Absolute scale normalized to a 0-1 range, averaged over five runs.
## Usage
```sh
pip install vllm
```
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
model_name = "tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(
model=model_name,
tensor_parallel_size=1,
)
sampling_params = SamplingParams(
temperature=0.6, top_p=0.9, max_tokens=512, stop="<|eot_id|>"
)
message = [
{"role": "system", "content": "あなたは誠実で優秀な日本人のアシスタントです。"},
{
"role": "user",
"content": "東京の紅葉した公園で、東京タワーと高層ビルを背景に、空を舞うツバメと草地に佇むラマが出会う温かな物語を書いてください。",
},
]
prompt = tokenizer.apply_chat_template(
message, tokenize=False, add_generation_prompt=True
)
output = llm.generate(prompt, sampling_params)
print(output[0].outputs[0].text)
```
## Training Datasets
### Instruction Tuning
The following datasets were used for the instruction tuning.
- Japanese
- [Llama-3.1-LMSYS-Chat-1M-Synth-Ja](https://huggingface.co/datasets/tokyotech-llm/lmsys-chat-1m-synth)
- Single-turn Japanese instruction dataset synthesized and derived from [lmsys-chat-1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) [\[Zhang+, ICLR24\]](https://openreview.net/forum?id=BOfDKxfwt0)). First-turn user instructions were translated into Japanese via DeepL (machine translation), and assistant responses were generated using [Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-405B-Instruct). [Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) served as a judge for rejection sampling (n=6).
Conversations containing personally identifiable information (PII) and template-based user instructions were removed. Duplicate instructions were removed.
- [Swallow-Magpie-Ultra-v0.1](https://huggingface.co/datasets/tokyotech-llm/swallow-magpie-ultra-v0.1)
- A Japanese variant of the `filtered-magpie-ultra-en` dataset, translated into Japanese by [gemma-2-27b-it](https://huggingface.co/google/gemma-2-27b-it).
- [Swallow-Gemma-Magpie-v0.1](https://huggingface.co/datasets/tokyotech-llm/swallow-gemma-magpie-v0.1)
- A Japanese synthetic instruction tuning dataset from scratch, generated by [gemma-2-27b-it](https://huggingface.co/google/gemma-2-27b-it). User instructions were created with prompts specific to each topic, and assistant responses were generated for these instructions. The conversations were then heuristically filtered for quality and length.
- English
- [Llama-3.1-LMSYS-Chat-1M-Synth-En](https://huggingface.co/datasets/tokyotech-llm/lmsys-chat-1m-synth)
- The creation process is similar to `Llama-3.1-LMSYS-Chat-1M-Synth-Ja`, but this version uses the original English user instructions. The assistant responses were generated in English as well. Rejection sampling was not applied for this version.
- `filtered-magpie-ultra-en`
- A subset of the [magpie-ultra](https://huggingface.co/datasets/argilla/magpie-ultra-v0.1) dataset, developed following the MAGPIE recipe [\[Xu+, arXiv24\]](https://arxiv.org/abs/2406.08464) using [Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-405B-Instruct). This subset includes only samples rated as 'average,' 'good,' or 'excellent.'
## Risks and Limitations
The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Acknowledgements
We thank Meta Research for releasing Llama 3.1 under a generous open license.
We received various supports including:
+ AIST project: "Research and Development of Foundation Models for Generative AI in the Physical Domain"
+ NEDO project: "Development of Artificial Intelligence Application Technology to Support Judgment in Design Risk Assessment Work Based on the Perspective of Skilled Persons" (JPNP18002) of "Development of Integration Technology as the Core of Next Generation Artificial Intelligence and Robotics"
+ MEXT project: "Formation of R&D center to ensure transparency and reliability of generative AI models"
+ AIST program: [Large Generative AI Development Support Program](https://abci.ai/en/link/lfm_support_program.html)
## License
[META LLAMA 3.1 COMMUNITY LICENSE](https://www.llama.com/llama3_1/license/) and [Gemma Terms of Use](https://ai.google.dev/gemma/terms)
## Authors
Here are the team members:
- From [Tokyo Institute of Technology Okazaki Laboratory](https://www.nlp.c.titech.ac.jp/index.en.html), the following members:
- [Naoaki Okazaki](https://www.chokkan.org/index.ja.html)
- [Sakae Mizuki](https://s-mizuki-nlp.github.io/)
- [Youmi Ma](https://www.nlp.c.titech.ac.jp/member/youmi.en.html)
- [Koki Maeda](https://sites.google.com/view/silviase)
- [Kakeru Hattori](https://aya-se.vercel.app/)
- [Masanari Ohi](https://sites.google.com/view/masanariohi)
- [Taihei Shiotani](https://github.com/inatoihs)
- [Koshiro Saito](https://sites.google.com/view/koshiro-saito)
- From [Tokyo Institute of Technology YOKOTA Laboratory](https://www.rio.gsic.titech.ac.jp/en/index.html), the following members:
- [Rio Yokota](https://twitter.com/rioyokota)
- [Kazuki Fujii](https://twitter.com/okoge_kaz)
- [Taishi Nakamura](https://twitter.com/Setuna7777_2)
- [Takumi Okamoto](https://www.linkedin.com/in/takumi-okamoto)
- [Ishida Shigeki](https://www.wantedly.com/id/reborn27)
- From [Artificial Intelligence Research Center, AIST, Japan](https://www.airc.aist.go.jp/en/teams/), the following members:
- [Hiroya Takamura](https://sites.google.com/view/hjtakamura)
## How to cite
If you find our work helpful, please feel free to cite these papers.
```
@inproceedings{Fujii:COLM2024,
title={Continual Pre-Training for Cross-Lingual LLM Adaptation:
Enhancing Japanese Language Capabilities},
author={Kazuki Fujii and Taishi Nakamura and Mengsay Loem and Hiroki
Iida and Masanari Ohi and Kakeru Hattori and Hirai Shota and Sakae
Mizuki and Rio Yokota and Naoaki Okazaki},
booktitle="Proceedings of the First Conference on Language Modeling",
series={COLM},
pages="(to appear)",
year="2024",
month=oct,
address={University of Pennsylvania, USA},
}
@inproceedings{Okazaki:COLM2024,
title={Building a Large Japanese Web Corpus for Large Language Models},
author={Naoaki Okazaki and Kakeru Hattori and Hirai Shota and Hiroki
Iida and Masanari Ohi and Kazuki Fujii and Taishi Nakamura and Mengsay
Loem and Rio Yokota and Sakae Mizuki},
booktitle="Proceedings of the First Conference on Language Modeling",
series={COLM},
pages="(to appear)",
year="2024",
month=oct,
address={University of Pennsylvania, USA},
}
```
### References
```tex
@misc{dubey2024llama3herdmodels,
title={The Llama 3 Herd of Models},
author={Abhimanyu Dubey and Abhinav Jauhri and Abhinav Pandey and Abhishek Kadian and Ahmad Al-Dahle and Aiesha Letman and Akhil Mathur and Alan Schelten and Amy Yang and Angela Fan et al.},
year={2024},
eprint={2407.21783},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2407.21783},
}
```
|
ooliverz/git-large-r-coco-IDB2-VAtlasv2 | ooliverz | 2025-03-31T15:33:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"git",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-03-31T15:30:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ZAIR-X/MT-SLM-7B | ZAIR-X | 2025-03-31T15:33:21Z | 2 | 1 | null | [
"safetensors",
"mistral",
"jaiyeshchahar/ChatingDeveloper-7B-slerp",
"jaiyeshchahar/storywriter-mathematician",
"base_model:jaiyeshchahar/ChatingDeveloper-7B-slerp",
"base_model:finetune:jaiyeshchahar/ChatingDeveloper-7B-slerp",
"license:apache-2.0",
"region:us"
]
| null | 2025-03-28T06:51:09Z | ---
license: apache-2.0
base_model:
- jaiyeshchahar/ChatingDeveloper-7B-slerp
- jaiyeshchahar/storywriter-mathematician
tags:
- jaiyeshchahar/ChatingDeveloper-7B-slerp
- jaiyeshchahar/storywriter-mathematician
---
# MT-SLM-7B
MT-SLM-7B is a mixture-of-experts model, a well-rounded AI capable of handling diverse tasks. It excels in coding, mathematical problem-solving, storytelling, and general-purpose chat interactions.
## 🧩 Components
MT-SLM-7B consists of four experts:
1. **Mathematics Expert**
Finetuned for mathematical reasoning and problem-solving.
2. **Coding Expert**
Finetuned for generating high-quality Python and general programming code.
3. **Chat Expert**
A general-purpose conversational AI for everyday interactions.
4. **Storytelling Expert**
Finetuned for generating creative and engaging stories.
## 🛠️ Model Configuration
This model supports an **8k context window** for extended interactions.
## 🚀 Usage
### 1. Install Dependencies
Install the required libraries using pip:
```bash
pip install -qU transformers accelerate
```
### 2. Load the Model and Generate Text
Below is an example Python script to load the model and generate text:
```python
from transformers import AutoTokenizer
import transformers
import torch
# Specify the model name
model = "ZAIR-X/MT-SLM-7B"
# Define your conversation as a list of messages
messages = [{"role": "user", "content": "What is a large language model?"}]
# Initialize the tokenizer and prepare the prompt
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Set up the text generation pipeline
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
# Generate text output
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
### 3. Example Use Cases
- **Article Explanation:** Summarize and explain complex articles.
- **Coding Assistance:** Generate, debug, and explain Python code.
- **Mathematical Problem Solving:** Handle computations and logical reasoning.
- **Creative Storytelling:** Craft engaging narratives and role-play scenarios.
## 🎯 Conclusion
MT-SLM-7B is a powerful, well-rounded assistant that leverages a mixture of expert models to deliver exceptional performance across various domains. Whether you need a reliable coding companion, a math tutor, or a creative storyteller, this model is designed to meet your needs. Try it out and experience the full range of its capabilities!
Happy generating! 🚀
|
mlc-ai/gemma-3-27b-it-q4f32_1-MLC | mlc-ai | 2025-03-31T15:31:40Z | 5 | 0 | mlc-llm | [
"mlc-llm",
"web-llm",
"base_model:google/gemma-3-27b-it",
"base_model:quantized:google/gemma-3-27b-it",
"region:us"
]
| null | 2025-03-24T04:51:16Z | ---
library_name: mlc-llm
base_model: google/gemma-3-27b-it
tags:
- mlc-llm
- web-llm
---
# gemma-3-27b-it-q4f32_1-MLC
This is the [gemma-3-27b-it](https://huggingface.co/google/gemma-3-27b-it) model in MLC format `q4f32_1`.
The model can be used for projects [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm).
## Example Usage
Here are some examples of using this model in MLC LLM.
Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages).
### Chat
In command line, run
```bash
mlc_llm chat HF://mlc-ai/gemma-3-27b-it-q4f32_1-MLC
```
### REST Server
In command line, run
```bash
mlc_llm serve HF://mlc-ai/gemma-3-27b-it-q4f32_1-MLC
```
### Python API
```python
from mlc_llm import MLCEngine
# Create engine
model = "HF://mlc-ai/gemma-3-27b-it-q4f32_1-MLC"
engine = MLCEngine(model)
# Run chat completion in OpenAI API.
for response in engine.chat.completions.create(
messages=[{"role": "user", "content": "What is the meaning of life?"}],
model=model,
stream=True,
):
for choice in response.choices:
print(choice.delta.content, end="", flush=True)
print("\n")
engine.terminate()
```
## Documentation
For more information on MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
|
okita-souji/q-Taxi-v3 | okita-souji | 2025-03-31T15:31:38Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-03-31T15:31:33Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# `load_from_hub` is the download helper defined in the Deep RL course notebooks
# (`gym` is the Gym/Gymnasium package used by the course).
model = load_from_hub(repo_id="okita-souji/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
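A short evaluation sketch (not from the original card): it assumes the downloaded dict exposes the Q-table under a `"qtable"` key, as in the course's pickle format, and that Gymnasium is installed.
```python
import gymnasium as gym
import numpy as np

# Greedy rollout with the downloaded Q-table (assumes model["qtable"] is the state x action table).
env = gym.make(model["env_id"])
state, info = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("Episode return:", total_reward)
```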
|
mlc-ai/gemma-3-27b-it-q4bf16_0-MLC | mlc-ai | 2025-03-31T15:31:26Z | 20 | 1 | mlc-llm | [
"mlc-llm",
"web-llm",
"base_model:google/gemma-3-27b-it",
"base_model:quantized:google/gemma-3-27b-it",
"region:us"
]
| null | 2025-03-17T05:53:17Z | ---
library_name: mlc-llm
base_model: google/gemma-3-27b-it
tags:
- mlc-llm
- web-llm
---
# gemma-3-27b-it-q4bf16_0-MLC
This is the [gemma-3-27b-it](https://huggingface.co/google/gemma-3-27b-it) model in MLC format `q4bf16_0`.
The model can be used for projects [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm).
## Example Usage
Here are some examples of using this model in MLC LLM.
Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages).
### Chat
In command line, run
```bash
mlc_llm chat HF://mlc-ai/gemma-3-27b-it-q4bf16_0-MLC
```
### REST Server
In command line, run
```bash
mlc_llm serve HF://mlc-ai/gemma-3-27b-it-q4bf16_0-MLC
```
### Python API
```python
from mlc_llm import MLCEngine
# Create engine
model = "HF://mlc-ai/gemma-3-27b-it-q4bf16_0-MLC"
engine = MLCEngine(model)
# Run chat completion in OpenAI API.
for response in engine.chat.completions.create(
messages=[{"role": "user", "content": "What is the meaning of life?"}],
model=model,
stream=True,
):
for choice in response.choices:
print(choice.delta.content, end="", flush=True)
print("\n")
engine.terminate()
```
## Documentation
For more information on MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
|
mlc-ai/gemma-3-12b-it-q4f32_1-MLC | mlc-ai | 2025-03-31T15:29:35Z | 4 | 0 | mlc-llm | [
"mlc-llm",
"web-llm",
"base_model:google/gemma-3-12b-it",
"base_model:quantized:google/gemma-3-12b-it",
"region:us"
]
| null | 2025-03-24T04:17:39Z | ---
library_name: mlc-llm
base_model: google/gemma-3-12b-it
tags:
- mlc-llm
- web-llm
---
# gemma-3-12b-it-q4f32_1-MLC
This is the [gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it) model in MLC format `q4f32_1`.
The model can be used for projects [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm).
## Example Usage
Here are some examples of using this model in MLC LLM.
Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages).
### Chat
In command line, run
```bash
mlc_llm chat HF://mlc-ai/gemma-3-12b-it-q4f32_1-MLC
```
### REST Server
In command line, run
```bash
mlc_llm serve HF://mlc-ai/gemma-3-12b-it-q4f32_1-MLC
```
### Python API
```python
from mlc_llm import MLCEngine
# Create engine
model = "HF://mlc-ai/gemma-3-12b-it-q4f32_1-MLC"
engine = MLCEngine(model)
# Run chat completion in OpenAI API.
for response in engine.chat.completions.create(
messages=[{"role": "user", "content": "What is the meaning of life?"}],
model=model,
stream=True,
):
for choice in response.choices:
print(choice.delta.content, end="", flush=True)
print("\n")
engine.terminate()
```
## Documentation
For more information on MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
|
mlc-ai/gemma-3-12b-it-q4bf16_0-MLC | mlc-ai | 2025-03-31T15:29:28Z | 13 | 1 | mlc-llm | [
"mlc-llm",
"web-llm",
"base_model:google/gemma-3-12b-it",
"base_model:quantized:google/gemma-3-12b-it",
"region:us"
]
| null | 2025-03-17T05:18:22Z | ---
library_name: mlc-llm
base_model: google/gemma-3-12b-it
tags:
- mlc-llm
- web-llm
---
# gemma-3-12b-it-q4bf16_0-MLC
This is the [gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it) model in MLC format `q4bf16_0`.
The model can be used for projects [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm).
## Example Usage
Here are some examples of using this model in MLC LLM.
Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages).
### Chat
In command line, run
```bash
mlc_llm chat HF://mlc-ai/gemma-3-12b-it-q4bf16_0-MLC
```
### REST Server
In command line, run
```bash
mlc_llm serve HF://mlc-ai/gemma-3-12b-it-q4bf16_0-MLC
```
### Python API
```python
from mlc_llm import MLCEngine
# Create engine
model = "HF://mlc-ai/gemma-3-12b-it-q4bf16_0-MLC"
engine = MLCEngine(model)
# Run chat completion in OpenAI API.
for response in engine.chat.completions.create(
messages=[{"role": "user", "content": "What is the meaning of life?"}],
model=model,
stream=True,
):
for choice in response.choices:
print(choice.delta.content, end="", flush=True)
print("\n")
engine.terminate()
```
## Documentation
For more information on MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
|
medmekk/Llama-3.2-1B-ao-int8wo-gs128 | medmekk | 2025-03-31T15:29:26Z | 0 | 0 | null | [
"pytorch",
"llama",
"base_model:medmekk/Llama-3.2-1B-ao-int8wo-gs128",
"base_model:quantized:medmekk/Llama-3.2-1B-ao-int8wo-gs128",
"torchao",
"region:us"
]
| null | 2025-03-31T15:28:59Z | ---
base_model:
- medmekk/Llama-3.2-1B-ao-int8wo-gs128
---
# medmekk/Llama-3.2-1B-ao-int8wo-gs128 (Quantized)
## Description
This model is a quantized version of the original model [`medmekk/Llama-3.2-1B-ao-int8wo-gs128`](https://huggingface.co/medmekk/Llama-3.2-1B-ao-int8wo-gs128).
It was quantized with the TorchAO library via the [torchao-my-repo](https://huggingface.co/spaces/pytorch/torchao-my-repo) space.
## Quantization Details
- **Quantization Type**: int8_weight_only
- **Group Size**: 128
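A minimal loading sketch (not from the original card), assuming the quantized checkpoint was saved with `save_pretrained` and that `torchao` is installed alongside a recent `transformers`:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "medmekk/Llama-3.2-1B-ao-int8wo-gs128"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```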
|
mlc-ai/gemma-3-12b-it-q4bf16_1-MLC | mlc-ai | 2025-03-31T15:29:21Z | 18 | 2 | mlc-llm | [
"mlc-llm",
"web-llm",
"base_model:google/gemma-3-12b-it",
"base_model:quantized:google/gemma-3-12b-it",
"region:us"
]
| null | 2025-03-17T05:18:58Z | ---
library_name: mlc-llm
base_model: google/gemma-3-12b-it
tags:
- mlc-llm
- web-llm
---
# gemma-3-12b-it-q4bf16_1-MLC
This is the [gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it) model in MLC format `q4bf16_1`.
The model can be used for projects [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm).
## Example Usage
Here are some examples of using this model in MLC LLM.
Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages).
### Chat
In command line, run
```bash
mlc_llm chat HF://mlc-ai/gemma-3-12b-it-q4bf16_1-MLC
```
### REST Server
In command line, run
```bash
mlc_llm serve HF://mlc-ai/gemma-3-12b-it-q4bf16_1-MLC
```
### Python API
```python
from mlc_llm import MLCEngine
# Create engine
model = "HF://mlc-ai/gemma-3-12b-it-q4bf16_1-MLC"
engine = MLCEngine(model)
# Run chat completion in OpenAI API.
for response in engine.chat.completions.create(
messages=[{"role": "user", "content": "What is the meaning of life?"}],
model=model,
stream=True,
):
for choice in response.choices:
print(choice.delta.content, end="", flush=True)
print("\n")
engine.terminate()
```
## Documentation
For more information on MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
|
mlc-ai/gemma-3-4b-it-q4f16_1-MLC | mlc-ai | 2025-03-31T15:28:31Z | 11 | 0 | mlc-llm | [
"mlc-llm",
"web-llm",
"base_model:google/gemma-3-4b-it",
"base_model:quantized:google/gemma-3-4b-it",
"region:us"
]
| null | 2025-03-24T04:06:17Z | ---
library_name: mlc-llm
base_model: google/gemma-3-4b-it
tags:
- mlc-llm
- web-llm
---
# gemma-3-4b-it-q4f16_1-MLC
This is the [gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it) model in MLC format `q4f16_1`.
The model can be used for projects [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm).
## Example Usage
Here are some examples of using this model in MLC LLM.
Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages).
### Chat
In command line, run
```bash
mlc_llm chat HF://mlc-ai/gemma-3-4b-it-q4f16_1-MLC
```
### REST Server
In command line, run
```bash
mlc_llm serve HF://mlc-ai/gemma-3-4b-it-q4f16_1-MLC
```
### Python API
```python
from mlc_llm import MLCEngine
# Create engine
model = "HF://mlc-ai/gemma-3-4b-it-q4f16_1-MLC"
engine = MLCEngine(model)
# Run chat completion in OpenAI API.
for response in engine.chat.completions.create(
messages=[{"role": "user", "content": "What is the meaning of life?"}],
model=model,
stream=True,
):
for choice in response.choices:
print(choice.delta.content, end="", flush=True)
print("\n")
engine.terminate()
```
## Documentation
For more information on MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
|
mlc-ai/gemma-3-4b-it-q4f32_1-MLC | mlc-ai | 2025-03-31T15:28:23Z | 11 | 0 | mlc-llm | [
"mlc-llm",
"web-llm",
"base_model:google/gemma-3-4b-it",
"base_model:quantized:google/gemma-3-4b-it",
"region:us"
]
| null | 2025-03-24T04:05:04Z | ---
library_name: mlc-llm
base_model: google/gemma-3-4b-it
tags:
- mlc-llm
- web-llm
---
# gemma-3-4b-it-q4f32_1-MLC
This is the [gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it) model in MLC format `q4f32_1`.
The model can be used for projects [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm).
## Example Usage
Here are some examples of using this model in MLC LLM.
Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages).
### Chat
In command line, run
```bash
mlc_llm chat HF://mlc-ai/gemma-3-4b-it-q4f32_1-MLC
```
### REST Server
In command line, run
```bash
mlc_llm serve HF://mlc-ai/gemma-3-4b-it-q4f32_1-MLC
```
### Python API
```python
from mlc_llm import MLCEngine
# Create engine
model = "HF://mlc-ai/gemma-3-4b-it-q4f32_1-MLC"
engine = MLCEngine(model)
# Run chat completion in OpenAI API.
for response in engine.chat.completions.create(
messages=[{"role": "user", "content": "What is the meaning of life?"}],
model=model,
stream=True,
):
for choice in response.choices:
print(choice.delta.content, end="", flush=True)
print("\n")
engine.terminate()
```
## Documentation
For more information on MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
|
mlc-ai/gemma-3-1b-it-q4f32_1-MLC | mlc-ai | 2025-03-31T15:26:31Z | 5 | 0 | mlc-llm | [
"mlc-llm",
"web-llm",
"base_model:google/gemma-3-1b-it",
"base_model:quantized:google/gemma-3-1b-it",
"region:us"
]
| null | 2025-03-24T03:59:43Z | ---
library_name: mlc-llm
base_model: google/gemma-3-1b-it
tags:
- mlc-llm
- web-llm
---
# gemma-3-1b-it-q4f32_1-MLC
This is the [gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) model in MLC format `q4f32_1`.
The model can be used for projects [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm).
## Example Usage
Here are some examples of using this model in MLC LLM.
Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages).
### Chat
In command line, run
```bash
mlc_llm chat HF://mlc-ai/gemma-3-1b-it-q4f32_1-MLC
```
### REST Server
In command line, run
```bash
mlc_llm serve HF://mlc-ai/gemma-3-1b-it-q4f32_1-MLC
```
### Python API
```python
from mlc_llm import MLCEngine
# Create engine
model = "HF://mlc-ai/gemma-3-1b-it-q4f32_1-MLC"
engine = MLCEngine(model)
# Run chat completion in OpenAI API.
for response in engine.chat.completions.create(
messages=[{"role": "user", "content": "What is the meaning of life?"}],
model=model,
stream=True,
):
for choice in response.choices:
print(choice.delta.content, end="", flush=True)
print("\n")
engine.terminate()
```
## Documentation
For more information on MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
|
mlc-ai/gemma-3-1b-it-q0f16-MLC | mlc-ai | 2025-03-31T15:26:13Z | 6 | 0 | mlc-llm | [
"mlc-llm",
"web-llm",
"base_model:google/gemma-3-1b-it",
"base_model:quantized:google/gemma-3-1b-it",
"region:us"
]
| null | 2025-03-24T04:01:29Z | ---
library_name: mlc-llm
base_model: google/gemma-3-1b-it
tags:
- mlc-llm
- web-llm
---
# gemma-3-1b-it-q0f16-MLC
This is the [gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) model in MLC format `q0f16`.
The model can be used for projects [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm).
## Example Usage
Here are some examples of using this model in MLC LLM.
Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages).
### Chat
In command line, run
```bash
mlc_llm chat HF://mlc-ai/gemma-3-1b-it-q0f16-MLC
```
### REST Server
In command line, run
```bash
mlc_llm serve HF://mlc-ai/gemma-3-1b-it-q0f16-MLC
```
### Python API
```python
from mlc_llm import MLCEngine
# Create engine
model = "HF://mlc-ai/gemma-3-1b-it-q0f16-MLC"
engine = MLCEngine(model)
# Run chat completion in OpenAI API.
for response in engine.chat.completions.create(
messages=[{"role": "user", "content": "What is the meaning of life?"}],
model=model,
stream=True,
):
for choice in response.choices:
print(choice.delta.content, end="", flush=True)
print("\n")
engine.terminate()
```
## Documentation
For more information on MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
|
mlc-ai/gemma-3-1b-it-q4bf16_1-MLC | mlc-ai | 2025-03-31T15:25:52Z | 17 | 1 | mlc-llm | [
"mlc-llm",
"web-llm",
"base_model:google/gemma-3-1b-it",
"base_model:quantized:google/gemma-3-1b-it",
"region:us"
]
| null | 2025-03-17T04:58:05Z | ---
library_name: mlc-llm
base_model: google/gemma-3-1b-it
tags:
- mlc-llm
- web-llm
---
# gemma-3-1b-it-q4bf16_1-MLC
This is the [gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) model in MLC format `q4bf16_1`.
The model can be used for projects [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm).
## Example Usage
Here are some examples of using this model in MLC LLM.
Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages).
### Chat
In command line, run
```bash
mlc_llm chat HF://mlc-ai/gemma-3-1b-it-q4bf16_1-MLC
```
### REST Server
In command line, run
```bash
mlc_llm serve HF://mlc-ai/gemma-3-1b-it-q4bf16_1-MLC
```
### Python API
```python
from mlc_llm import MLCEngine
# Create engine
model = "HF://mlc-ai/gemma-3-1b-it-q4bf16_1-MLC"
engine = MLCEngine(model)
# Run chat completion in OpenAI API.
for response in engine.chat.completions.create(
messages=[{"role": "user", "content": "What is the meaning of life?"}],
model=model,
stream=True,
):
for choice in response.choices:
print(choice.delta.content, end="", flush=True)
print("\n")
engine.terminate()
```
## Documentation
For more information on MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
|
AlexeyShevcov/lilygrow121 | AlexeyShevcov | 2025-03-31T15:25:24Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-03-31T15:25:18Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: LILYGROW
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# LILYGROW
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `LILYGROW` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
|
lerobot/pi0fast_base | lerobot | 2025-03-31T15:25:11Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"arxiv:2501.09747",
"license:apache-2.0",
"region:us"
]
| robotics | 2025-03-31T15:11:24Z | ---
license: apache-2.0
library_name: lerobot
pipeline_tag: robotics
---
# π0+FAST: Efficient Action Tokenization for Vision-Language-Action Models
[Paper](https://arxiv.org/abs/2501.09747)
[Jax code](https://github.com/Physical-Intelligence/openpi)
Designed by Physical Intelligence. Ported from Jax by Hugging Face.
Example of finetuning the pi0+FAST pretrained model (`pi0_fast_base` in `openpi`):
```bash
python lerobot/scripts/train.py \
--policy.path=lerobot/pi0fast_base \
--dataset.repo_id=danaaubakirova/koch_test
```
Example of training the pi0+FAST neural network from scratch:
```bash
python lerobot/scripts/train.py \
--policy.type=pi0fast \
--dataset.repo_id=danaaubakirova/koch_test
```
Example of using the pi0+FAST pretrained model outside the LeRobot training framework:
```python
# Note: import PI0FASTPolicy from your installed lerobot version's policy modules.
policy = PI0FASTPolicy.from_pretrained("lerobot/pi0fast_base")
```
|
nathanialhunt2000/51da1d48-5bc1-486e-8386-c5ca6c4869fa | nathanialhunt2000 | 2025-03-31T15:22:46Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M-Instruct",
"base_model:adapter:unsloth/SmolLM-360M-Instruct",
"region:us"
]
| null | 2025-03-31T15:22:29Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/SmolLM-360M-Instruct
model-index:
- name: nathanialhunt2000/51da1d48-5bc1-486e-8386-c5ca6c4869fa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nathanialhunt2000/51da1d48-5bc1-486e-8386-c5ca6c4869fa
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0326
## Model description
More information needed
## Intended uses & limitations
More information needed
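
Pending more details from the author, a minimal loading sketch (assuming this repo holds a PEFT adapter for the listed base model):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/SmolLM-360M-Instruct"
adapter_id = "nathanialhunt2000/51da1d48-5bc1-486e-8386-c5ca6c4869fa"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the PEFT adapter
```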
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
BFS-Search/llama-3.1_Wikidata_negative_instruction_tuned | BFS-Search | 2025-03-31T15:21:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T15:21:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alikShepot/corporate_illustration_LoRA | alikShepot | 2025-03-31T15:19:26Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2025-03-31T15:19:21Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: illustration in CORPORATE style,
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - alikShepot/corporate_illustration_LoRA
<Gallery />
## Model description
These are alikShepot/corporate_illustration_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `illustration in CORPORATE style,` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](alikShepot/corporate_illustration_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
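Until the official snippet above is filled in, here is a minimal sketch (assumptions: the safetensors LoRA in this repo loads directly with diffusers' `load_lora_weights`, and a CUDA GPU is available):
```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("alikShepot/corporate_illustration_LoRA")

# The trigger phrase goes at the start of the prompt.
prompt = "illustration in CORPORATE style, a team reviewing charts around a laptop"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("corporate_illustration.png")
```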
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
NCCUTAT/T5_nolora33 | NCCUTAT | 2025-03-31T15:18:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-03-31T15:18:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BigSmiley7/Reinforce-Copter_V1 | BigSmiley7 | 2025-03-31T15:18:27Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-03-31T08:19:02Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Copter_v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 33.90 +/- 29.59
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Jonjew/PrismPulse | Jonjew | 2025-03-31T15:16:46Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
]
| text-to-image | 2025-03-31T15:16:27Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
A trickster spirit emerging from the shadows, its form shifting and changing
with swirling abstract designs. The air around it is alive with vibrant,
chaotic splatter patterns and glowing, flowing brushstrokes., a diclrbrst
burst of colors, <lora:Color-Burst_v20-000084:1>
output:
url: images/00057-2025-02-18-263513326.png
- text: >-
A girl that is a ethereal beauty, 1girl, a goddess with flowing robes and a
radiant aura, celestial grace, intricate details, a diclrbrst burst of
colors, <lora:Color-Burst_v20-000084:1>
output:
url: images/00007-2025-02-18-3392014079.png
- text: >-
A techno-shaman adorned in bio-luminescent tribal markings stands in a
crystalline cave pulsating with holographic energy. The cavern walls shift
like liquid code, responding to the rhythmic chants reverberating through
the space. The figure's cybernetic staff crackles with quantum resonance,
linking their spirit to the vast intergalactic data streams that pulse
beyond the veil of reality. The scene is awash in glowing blue and violet
tones, captured from a slightly elevated side view for an immersive,
ritualistic feel., a diclrbrst burst of colors,
<lora:Color-Burst_v20-000084:1>
output:
url: images/00147-2025-02-18-2077095825.png
- text: >-
Text that says "Prism Pulse" across the top in energetic and colorful font,
underneath it is A woman with a dress adorned in dragon-scale patterns, each
detail highlighted by glowing tendrils of energy. Behind her, a swirling
background of abstract celestial bodies moves in fluid, spiraling
formations., a diclrbrst burst of colors, <lora:Color-Burst_v20-000084:1>
output:
url: images/27141-Color Burst v20-84.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: a diclrbrst burst of colors
license: unknown
---
# Prism Pulse
<Gallery />
## Model description
Source: https://civitai.com/models/1357407/prism-pulse?modelVersionId=1533361

- Trigger: `a diclrbrst burst of colors`
- Strength: 1
Prism Pulse is a LoRA designed to infuse your images with dynamic energy, vibrant color explosions, and radiant rainbow light effects. These bursts of color bring any composition to life, adding motion, abstract energy, and surreal vibrancy to your generations. Perfect for electrifying scenes with dazzling, high-impact visuals.
**Usage**

To use the most recent version of the LoRA, use the following settings:

- Trigger word: `diclrbrst`, as in "a diclrbrst burst of colors"
- Other tokens that work well: describing colorful imagery works well, but the model likes particular keywords or phrases like: swirling, energetic bursts of color, vibrant, chaotic splatter, bio-luminescent, awash in glowing [colors]
- LoRA strength: a strength between 0.8 and 1.2 is recommended. It can be fun to really turn it up, but most images I created were made with a strength set at 1.

For differences in previous versions, see the version notes to the right.
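
A minimal text-to-image sketch (not part of the original description), assuming the LoRA in this repo loads on top of FLUX.1-dev with diffusers' `load_lora_weights`:
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Jonjew/PrismPulse")

prompt = "a diclrbrst burst of colors, a dancer mid-spin surrounded by swirling, energetic bursts of color"
image = pipe(prompt, guidance_scale=3.5, num_inference_steps=28).images[0]
image.save("prism_pulse.png")
```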
## Trigger words
You should use `a diclrbrst burst of colors` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/PrismPulse/tree/main) them in the Files & versions tab.
|
qwzy123/DeepSeek-R1-Medical-COT | qwzy123 | 2025-03-31T15:15:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T01:24:14Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
xpower1991/model | xpower1991 | 2025-03-31T15:10:23Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T14:11:02Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** xpower1991
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Sofia-gb/fashionclip-finetuned2 | Sofia-gb | 2025-03-31T15:09:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
]
| feature-extraction | 2025-03-31T14:52:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
silviasapora/gemma-7b-sft-cpo-basic-5e-7-005-v140 | silviasapora | 2025-03-31T15:08:59Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"orpo",
"conversational",
"dataset:argilla/dpo-mix-7k",
"arxiv:2403.07691",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T14:39:46Z | ---
datasets:
- argilla/dpo-mix-7k
library_name: transformers
model_name: /home/silvias/docker/alignment-handbook/data/gemma-7b-sft-basic-5e-5-00-v130-full
tags:
- generated_from_trainer
- alignment-handbook
- trl
- orpo
licence: license
---
# Model Card for /home/silvias/docker/alignment-handbook/data/gemma-7b-sft-basic-5e-5-00-v130-full
This model is a fine-tuned version of [None](https://huggingface.co/None) on the [argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="silviasapora/gemma-7b-sft-cpo-basic-5e-7-005-v140", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/silvias/huggingface/runs/599hkvi6)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.21.1
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
RichardErkhov/Sao10K_-_L3-8B-Stheno-v3.2-8bits | RichardErkhov | 2025-03-31T15:05:02Z | 0 | 0 | null | [
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-03-31T14:58:37Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
L3-8B-Stheno-v3.2 - bnb 8bits
- Model creator: https://huggingface.co/Sao10K/
- Original model: https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2/
Original model description:
---
license: cc-by-nc-4.0
language:
- en
datasets:
- Gryphe/Opus-WritingPrompts
- Sao10K/Claude-3-Opus-Instruct-15K
- Sao10K/Short-Storygen-v2
- Sao10K/c2-Logs-Filtered
---
*Just message me on discord if you want to host this privately for a service or something. We can talk.*
*Training used 1x H100 SXM for like a total of 24 hours over multiple runs.*
Support me here if you're interested:
<br>Ko-fi: https://ko-fi.com/sao10k
<br> *wink* Euryale v2?
If not, that's fine too. Feedback would be nice.
Contact Me in Discord:
<br>`sao10k` // `Just ping me in the KoboldAI discord, I'll respond faster.`
`Art by navy_(navy.blue)` - [Danbooru](https://danbooru.donmai.us/posts/3214477)
---

Stheno-v3.2-Zeta
I have done test runs with multiple variations of the model, merged back to its base at various weights, with different training runs too, and this sixth iteration is the one I like most.
Changes compared to v3.1
<br>\- Included a mix of SFW and NSFW Storywriting Data, thanks to [Gryphe](https://huggingface.co/datasets/Gryphe/Opus-WritingPrompts)
<br>\- Included More Instruct / Assistant-Style Data
<br>\- Further cleaned up Roleplaying Samples from c2 Logs -> A few terrible, really bad samples escaped heavy filtering. Manual pass fixed it.
<br>\- Hyperparameter tinkering for training, resulting in lower loss levels.
Testing Notes - Compared to v3.1
<br>\- Handles SFW / NSFW separately better. Not as overly excessive with NSFW now. Kinda balanced.
<br>\- Better at Storywriting / Narration.
<br>\- Better at Assistant-type Tasks.
<br>\- Better Multi-Turn Coherency -> Reduced Issues?
<br>\- Slightly less creative? A worthy tradeoff. Still creative.
<br>\- Better prompt / instruction adherence.
---
**Recommended Samplers:**
```
Temperature - 1.12-1.22
Min-P - 0.075
Top-K - 50
Repetition Penalty - 1.1
```
**Stopping Strings:**
```
\n\n{{User}} # Or Equivalent, depending on Frontend
<|eot_id|>
<|end_of_text|>
```
**Prompting Template - Llama-3-Instruct**
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
```
**Basic Roleplay System Prompt**
```
You are an expert actor that can fully immerse yourself into any role given. You do not break character for any reason, even if someone tries addressing you as an AI or language model.
Currently your role is {{char}}, which is described in detail below. As {{char}}, continue the exchange with {{user}}.
```
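
As a rough mapping (not from the original card), the recommended samplers translate to Hugging Face `generate` arguments roughly as follows; the snippet assumes you load the original full-precision repo or this 8-bit quant, with a recent `transformers` that supports `min_p`:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sao10K/L3-8B-Stheno-v3.2"  # or this bnb 8-bit repack
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are an expert actor that can fully immerse yourself into any role given."},
    {"role": "user", "content": "Describe the tavern as I push open its door."},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=1.15,   # recommended range 1.12-1.22
    min_p=0.075,
    top_k=50,
    repetition_penalty=1.1,
)
print(tokenizer.decode(output[0, input_ids.shape[-1]:], skip_special_tokens=True))
```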
---
|
prithivMLmods/Llama-3B-Mono-Cooper | prithivMLmods | 2025-03-31T15:04:47Z | 5 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"Radio-Audio",
"Voice:Cooper",
"Male",
"text-to-speech",
"en",
"base_model:canopylabs/orpheus-3b-0.1-ft",
"base_model:finetune:canopylabs/orpheus-3b-0.1-ft",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-to-speech | 2025-03-29T06:32:51Z | ---
license: llama3.2
language:
- en
base_model:
- canopylabs/orpheus-3b-0.1-ft
pipeline_tag: text-to-speech
library_name: transformers
tags:
- Radio-Audio
- Voice:Cooper
- Male
---

# **Llama-3B-Mono-Cooper**
> Llama-3B-Mono-Cooper is a Llama-based Speech-LLM designed for high-quality, empathetic text-to-speech generation. This model has been fine-tuned to deliver human-like speech synthesis, achieving exceptional clarity, expressiveness, and real-time streaming performance. The model has been fine-tuned from mono audio of a male voice named 'Cooper' using the base model `canopylabs/orpheus-3b-0.1-ft`.
> [!Important]
> In some cases, the results may be inconsistent, particularly when handling complex speech transformations.
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/ea7Ylgfb7wZ8tmLIFdWbf.wav"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/iTcZ1e2UYo_CkurPR_fsh.wav"></audio>
[ paralinguistic emotions soft]
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/A8KfCQs7nwyk07kMM_r7P.wav"></audio>
## **Model Details**
- **Base Model:** `canopylabs/orpheus-3b-0.1-ft`
- **Languages Supported:** English
- **License:** Llama 3.2
- **Model Version:** N/A
---
## **Paralinguistic Elements**
The model can generate speech with the following emotions:
| Elements | Elements | Elements |
|------------|------------|------------|
| laugh | chuckle | sigh |
| sniffle | groan | yawn |
| gasp | uhm | giggles & more |
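
As a rough illustration (not from the original card), paralinguistic cues are written inline in the prompt text; the angle-bracket tag syntax below follows the Orpheus convention and is an assumption for this finetune:
```python
# Hypothetical example text — tag syntax assumed to follow the Orpheus convention.
text = "Well <sigh> I really wasn't expecting that... <laugh> alright, let's try it again."
```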
---
## **Run with Transformers 🤝**
```python
from huggingface_hub import notebook_login, HfApi
notebook_login()
```
### **Install Dependencies**
```python
%%capture
!pip install snac accelerate
!pip install transformers
!pip install gradio
```
## **Usage**
```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import gradio as gr
from snac import SNAC
def redistribute_codes(row):
"""
Convert a sequence of token codes into an audio waveform using SNAC.
The code assumes each 7 tokens represent one group of instructions.
"""
row_length = row.size(0)
new_length = (row_length // 7) * 7
trimmed_row = row[:new_length]
code_list = [t - 128266 for t in trimmed_row]
layer_1, layer_2, layer_3 = [], [], []
for i in range((len(code_list) + 1) // 7):
layer_1.append(code_list[7 * i][None])
layer_2.append(code_list[7 * i + 1][None] - 4096)
layer_3.append(code_list[7 * i + 2][None] - (2 * 4096))
layer_3.append(code_list[7 * i + 3][None] - (3 * 4096))
layer_2.append(code_list[7 * i + 4][None] - (4 * 4096))
layer_3.append(code_list[7 * i + 5][None] - (5 * 4096))
layer_3.append(code_list[7 * i + 6][None] - (6 * 4096))
with torch.no_grad():
codes = [
torch.concat(layer_1),
torch.concat(layer_2),
torch.concat(layer_3)
]
for i in range(len(codes)):
codes[i][codes[i] < 0] = 0
codes[i] = codes[i][None]
audio_hat = snac_model.decode(codes)
return audio_hat.cpu()[0, 0]
# Load the SNAC model for audio decoding
snac_model = SNAC.from_pretrained("hubertsiuzdak/snac_24khz").to("cuda")
# Load the single-speaker language model
tokenizer = AutoTokenizer.from_pretrained('prithivMLmods/Llama-3B-Mono-Cooper')
model = AutoModelForCausalLM.from_pretrained(
'prithivMLmods/Llama-3B-Mono-Cooper', torch_dtype=torch.bfloat16
).cuda()
def generate_audio(text, temperature, top_p, max_new_tokens):
"""
Given input text, generate speech audio.
"""
speaker = "Cooper"
prompt = f'<custom_token_3><|begin_of_text|>{speaker}: {text}<|eot_id|><custom_token_4><custom_token_5><custom_token_1>'
input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').to('cuda')
with torch.no_grad():
generated_ids = model.generate(
**input_ids,
max_new_tokens=max_new_tokens,
do_sample=True,
temperature=temperature,
top_p=top_p,
repetition_penalty=1.1,
num_return_sequences=1,
eos_token_id=128258,
)
row = generated_ids[0, input_ids['input_ids'].shape[1]:]
y_tensor = redistribute_codes(row)
y_np = y_tensor.detach().cpu().numpy()
return (24000, y_np)
# Gradio Interface
with gr.Blocks() as demo:
gr.Markdown("# Llama-3B-Mono-Cooper - Single Speaker Audio Generation")
gr.Markdown("Generate speech audio using the `prithivMLmods/Llama-3B-Mono-Cooper` model.")
with gr.Row():
text_input = gr.Textbox(lines=4, label="Input Text")
with gr.Row():
temp_slider = gr.Slider(minimum=0.1, maximum=2.0, step=0.1, value=0.9, label="Temperature")
top_p_slider = gr.Slider(minimum=0.1, maximum=1.0, step=0.05, value=0.8, label="Top-p")
tokens_slider = gr.Slider(minimum=100, maximum=2000, step=50, value=1200, label="Max New Tokens")
output_audio = gr.Audio(type="numpy", label="Generated Audio")
generate_button = gr.Button("Generate Audio")
generate_button.click(
fn=generate_audio,
inputs=[text_input, temp_slider, top_p_slider, tokens_slider],
outputs=output_audio
)
if __name__ == "__main__":
demo.launch()
```
Or, equivalently, without a speaker-name prefix in the prompt:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import gradio as gr
from snac import SNAC
def redistribute_codes(row):
"""
Convert a sequence of token codes into an audio waveform using SNAC.
    The code assumes every 7 consecutive tokens form one group of codes, split across the three SNAC layers.
"""
row_length = row.size(0)
new_length = (row_length // 7) * 7
trimmed_row = row[:new_length]
code_list = [t - 128266 for t in trimmed_row]
layer_1, layer_2, layer_3 = [], [], []
for i in range((len(code_list) + 1) // 7):
layer_1.append(code_list[7 * i][None])
layer_2.append(code_list[7 * i + 1][None] - 4096)
layer_3.append(code_list[7 * i + 2][None] - (2 * 4096))
layer_3.append(code_list[7 * i + 3][None] - (3 * 4096))
layer_2.append(code_list[7 * i + 4][None] - (4 * 4096))
layer_3.append(code_list[7 * i + 5][None] - (5 * 4096))
layer_3.append(code_list[7 * i + 6][None] - (6 * 4096))
with torch.no_grad():
codes = [
torch.concat(layer_1),
torch.concat(layer_2),
torch.concat(layer_3)
]
for i in range(len(codes)):
codes[i][codes[i] < 0] = 0
codes[i] = codes[i][None]
audio_hat = snac_model.decode(codes)
return audio_hat.cpu()[0, 0]
# Load the SNAC model for audio decoding
snac_model = SNAC.from_pretrained("hubertsiuzdak/snac_24khz").to("cuda")
# Load the single-speaker language model
tokenizer = AutoTokenizer.from_pretrained('prithivMLmods/Llama-3B-Mono-Cooper')
model = AutoModelForCausalLM.from_pretrained(
'prithivMLmods/Llama-3B-Mono-Cooper', torch_dtype=torch.bfloat16
).cuda()
def generate_audio(text, temperature, top_p, max_new_tokens):
"""
Given input text, generate speech audio.
"""
prompt = f'<custom_token_3><|begin_of_text|>{text}<|eot_id|><custom_token_4><custom_token_5><custom_token_1>'
input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').to('cuda')
with torch.no_grad():
generated_ids = model.generate(
**input_ids,
max_new_tokens=max_new_tokens,
do_sample=True,
temperature=temperature,
top_p=top_p,
repetition_penalty=1.1,
num_return_sequences=1,
eos_token_id=128258,
)
row = generated_ids[0, input_ids['input_ids'].shape[1]:]
y_tensor = redistribute_codes(row)
y_np = y_tensor.detach().cpu().numpy()
return (24000, y_np)
# Gradio Interface
with gr.Blocks() as demo:
gr.Markdown("# Llama-3B-Mono-Cooper - Single Speaker Audio Generation")
gr.Markdown("Generate speech audio using the `prithivMLmods/Llama-3B-Mono-Cooper` model.")
with gr.Row():
text_input = gr.Textbox(lines=4, label="Input Text")
with gr.Row():
temp_slider = gr.Slider(minimum=0.1, maximum=2.0, step=0.1, value=0.9, label="Temperature")
top_p_slider = gr.Slider(minimum=0.1, maximum=1.0, step=0.05, value=0.8, label="Top-p")
tokens_slider = gr.Slider(minimum=100, maximum=2000, step=50, value=1200, label="Max New Tokens")
output_audio = gr.Audio(type="numpy", label="Generated Audio")
generate_button = gr.Button("Generate Audio")
generate_button.click(
fn=generate_audio,
inputs=[text_input, temp_slider, top_p_slider, tokens_slider],
outputs=output_audio
)
if __name__ == "__main__":
demo.launch()
```
---
## **Intended Use**
- Designed for high-quality, single-speaker text-to-speech generation.
- Ideal for applications requiring human-like speech synthesis.
- Supports a range of emotions for expressive speech output.
- Suitable for AI voice assistants, storytelling, and accessibility applications. |
chatpig/gemma-3-27b-it-gguf | chatpig | 2025-03-31T15:02:44Z | 37 | 0 | null | [
"gguf",
"gguf-connector",
"image-text-to-text",
"base_model:google/gemma-3-27b-it",
"base_model:quantized:google/gemma-3-27b-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
]
| image-text-to-text | 2025-03-30T11:29:33Z | ---
license: gemma
base_model:
- google/gemma-3-27b-it
pipeline_tag: image-text-to-text
tags:
- gguf-connector
---
# gemma-3-27b-it-gguf
- base model from google
- original safetensors [here](https://huggingface.co/callgg/gemma-3-27b-it-bf16)
- for text/image-text-to-text generation |
wind-strider/emotion-detection | wind-strider | 2025-03-31T15:02:14Z | 0 | 1 | null | [
"region:us"
]
| null | 2025-03-31T14:05:56Z | applied to https://github.com/Freshmanwqwe/emotion-recognition |
iTroned/bert_shap_test_v1 | iTroned | 2025-03-31T14:59:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T14:52:46Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_shap_test_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/itroned-ntnu/huggingface/runs/htkcjo0z)
# bert_shap_test_v1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.0.1
- Tokenizers 0.21.1
|
greatnomadicseal/ppo-Huggy | greatnomadicseal | 2025-03-31T14:56:57Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2025-03-31T14:56:52Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: greatnomadicseal/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Daksh1/ree2 | Daksh1 | 2025-03-31T14:55:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T14:55:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gradientrouting-spar/toy_goodharting_gemma-2-2b-it_rand1_randA_vgtbls_naive_r_0p25_seed_42_20250331_140553 | gradientrouting-spar | 2025-03-31T14:55:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T14:54:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ISeeber04/ppo-Huggy | ISeeber04 | 2025-03-31T14:54:46Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2025-03-31T14:54:24Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ISeeber04/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
tyrael147/ei39_test_100 | tyrael147 | 2025-03-31T14:53:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T14:53:36Z | ---
base_model: unsloth/Llama-3.2-3B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** tyrael147
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ItsMaxNorm/MedAgentSim-datasets | ItsMaxNorm | 2025-03-31T14:53:05Z | 0 | 0 | null | [
"text-generation",
"en",
"arxiv:2405.07960",
"arxiv:2503.22678",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.3-70B-Instruct",
"region:us"
]
| text-generation | 2025-03-31T14:14:14Z | ---
language:
- en
metrics:
- accuracy
base_model:
- meta-llama/Llama-3.3-70B-Instruct
pipeline_tag: text-generation
---
# MedAgentSim Datasets
GitHub: [https://github.com/MAXNORM8650/MedAgentSim](https://github.com/MAXNORM8650/MedAgentSim)
Website: [https://medagentsim.netlify.app](https://medagentsim.netlify.app)
This repository contains various datasets used in the MedAgentSim project for simulating medical agent interactions.
## Datasets Included
- **nejm_dataset_v1.jsonl**: A dataset related to the New England Journal of Medicine (NEJM) clinical cases.
- **medqa_extended_v1.jsonl**: Extended dataset for medical question-answering tasks with comprehensive coverage.
- **medqa_v1.jsonl**: Dataset focused on general medical question-answering.
- **mimiciv_v1.jsonl**: Dataset based on the MIMIC-IV medical database with patient trajectories.
- **nejm_extended_v1.jsonl**: Extended version of the NEJM dataset with additional clinical scenarios.
## Usage
To load the datasets, simply use the following code:
```python
import json
# Example for loading a dataset
with open("dataset_filename.jsonl", "r") as f:
data = [json.loads(line) for line in f]
```
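If you are reading the files straight from this Hub repository rather than a local copy, one option (filenames taken from the list above) is `hf_hub_download`:
```python
import json
from huggingface_hub import hf_hub_download

# Fetch one of the dataset files from this repo, then parse it line by line
path = hf_hub_download(repo_id="ItsMaxNorm/MedAgentSim-datasets", filename="medqa_v1.jsonl")
with open(path, "r") as f:
    data = [json.loads(line) for line in f]
print(len(data), "records")
```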
## License
This repository is under the MIT License. See the LICENSE file for more details.
## Acknowledgments
- This work was supported by the MedAgentSim project.
- The MIMIC-IV dataset is publicly available and was used for medical data simulations.
- Citation for AgentClinic:
```
@misc{schmidgall2024agentclinic,
title={AgentClinic: a multimodal agent benchmark to evaluate AI in simulated clinical environments},
author={Samuel Schmidgall and Rojin Ziaei and Carl Harris and Eduardo Reis and Jeffrey Jopling and Michael Moor},
year={2024},
eprint={2405.07960},
archivePrefix={arXiv},
primaryClass={cs.HC}
}
```
- Citation for Self-Evolving Multi-Agent Simulations:
```
@misc{almansoori2025selfevolvingmultiagentsimulationsrealistic,
title={Self-Evolving Multi-Agent Simulations for Realistic Clinical Interactions},
author={Mohammad Almansoori and Komal Kumar and Hisham Cholakkal},
year={2025},
eprint={2503.22678},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2503.22678},
}
```
## Contact
For any questions or inquiries, please reach out to Komal Kumar. |
iTroned/bert_8_hate_test | iTroned | 2025-03-31T14:52:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T14:34:08Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_8_hate_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/itroned-ntnu/huggingface/runs/r28r8zy5)
# bert_8_hate_test
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2677
- Accuracy Offensive: 0.9230
- F1 Offensive: 0.9196
- Accuracy Targeted: 0.9441
- F1 Targeted: 0.9173
- Accuracy Stance: 0.9079
- F1 Stance: 0.8717
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Offensive | F1 Offensive | Accuracy Targeted | F1 Targeted | Accuracy Stance | F1 Stance |
|:-------------:|:-----:|:-----:|:---------------:|:------------------:|:------------:|:-----------------:|:-----------:|:---------------:|:---------:|
| 0.7468 | 1.0 | 1490 | 0.6618 | 0.6850 | 0.5570 | 0.6850 | 0.5570 | 0.7409 | 0.6307 |
| 0.6154 | 2.0 | 2980 | 0.4579 | 0.7591 | 0.7025 | 0.8875 | 0.8606 | 0.8595 | 0.8190 |
| 0.4375 | 3.0 | 4470 | 0.3543 | 0.8391 | 0.8200 | 0.9305 | 0.9040 | 0.8980 | 0.8618 |
| 0.358 | 4.0 | 5960 | 0.3166 | 0.8739 | 0.8634 | 0.9388 | 0.9121 | 0.9033 | 0.8674 |
| 0.3294 | 5.0 | 7450 | 0.3014 | 0.8754 | 0.8652 | 0.9411 | 0.9143 | 0.9048 | 0.8686 |
| 0.2979 | 6.0 | 8940 | 0.2856 | 0.9086 | 0.9037 | 0.9434 | 0.9165 | 0.9071 | 0.8710 |
| 0.2854 | 7.0 | 10430 | 0.2746 | 0.9230 | 0.9196 | 0.9434 | 0.9165 | 0.9079 | 0.8717 |
| 0.2738 | 8.0 | 11920 | 0.2722 | 0.9192 | 0.9155 | 0.9441 | 0.9173 | 0.9079 | 0.8717 |
| 0.2664 | 9.0 | 13410 | 0.2692 | 0.9230 | 0.9196 | 0.9441 | 0.9173 | 0.9079 | 0.8717 |
| 0.2613 | 10.0 | 14900 | 0.2677 | 0.9230 | 0.9196 | 0.9441 | 0.9173 | 0.9079 | 0.8717 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.0.1
- Tokenizers 0.21.1
|
lesso01/a4401641-b0b8-499f-954d-936833b96297 | lesso01 | 2025-03-31T14:50:18Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-7B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Math-7B-Instruct",
"license:apache-2.0",
"region:us"
]
| null | 2025-03-31T13:10:11Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a4401641-b0b8-499f-954d-936833b96297
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4d391191a0d59966_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4d391191a0d59966_train_data.json
type:
field_input: input_context
field_instruction: instruction
field_output: errors
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso01/a4401641-b0b8-499f-954d-936833b96297
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000201
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/4d391191a0d59966_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 10
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 42d84aad-b019-4139-9c2c-fd168e376acc
wandb_project: 01a
wandb_run: your_name
wandb_runid: 42d84aad-b019-4139-9c2c-fd168e376acc
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a4401641-b0b8-499f-954d-936833b96297
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7077
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000201
- train_batch_size: 4
- eval_batch_size: 4
- seed: 10
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0008 | 1 | 3.3349 |
| 0.706 | 0.4080 | 500 | 0.7077 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Daksh1/ree1 | Daksh1 | 2025-03-31T14:47:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T14:47:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kimang18/Llama3.2-think-4bit | Kimang18 | 2025-03-31T14:47:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
]
| text-generation | 2025-03-31T14:46:24Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Kimang18
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Jonjew/VikingPrincessCFH | Jonjew | 2025-03-31T14:47:26Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
]
| text-to-image | 2025-03-31T14:47:18Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
A fierce Viking woman standing on a rocky shoreline at dawn, facing the
camera with an intense, unflinching gaze. She wears a dusty crimson and bone
white fur-trimmed V1k1nG_Pr1nC3sS outfit, featuring a battle-ready aged
bronze sculpted corset, a split skirt with worn leather textures, tall
distressed boots, and a heavy fur-lined cape flowing behind her in the wind.
Her long wavy brown hair is wild and windblown. A bold black war paint
stripe runs across her eyes, giving her a fierce, intimidating appearance.
In the background, a Viking longship with a red and white striped sail
floats in the water, silhouetted against a dramatic orange sunrise. She
grips a bloodstained axe in one hand and stands over a beach scattered with
broken shields and debris. The lighting is cold and natural, with
photorealistic textures, cinematic shadows, and gritty realism throughout
the scene.<lora:Viking_Princess_CFH.safetensors:1.0:1.0>
parameters:
negative_prompt: >-
A fierce Viking woman standing on a rocky shoreline at dawn, facing the
camera with an intense, unflinching gaze. She wears a dusty crimson and
bone white fur-trimmed V1k1nG_Pr1nC3sS outfit, featuring a battle-ready
aged bronze sculpted corset, a split skirt with worn leather textures,
tall distressed boots, and a heavy fur-lined cape flowing behind her in
the wind. Her long wavy brown hair is wild and windblown. A bold black war
paint stripe runs across her eyes, giving her a fierce, intimidating
appearance. In the background, a Viking longship with a red and white
striped sail floats in the water, silhouetted against a dramatic orange
sunrise. She grips a bloodstained axe in one hand and stands over a beach
scattered with broken shields and debris. The lighting is cold and
natural, with photorealistic textures, cinematic shadows, and gritty
realism throughout the scene.
output:
url: images/FLUX_0008.png
- text: >-
A confident Viking princess standing in a grand medieval throne room. She
wears a red and gold fur-trimmed V1k1nG_Pr1nC3sS outfit, featuring an ornate
corset and a floor-length cream gown with golden accents. Her golden heels
peek out from under the gown as she stands tall, one hand on her hip. A
jeweled silver tiara rests on her head, and her long wavy brown hair flows
past her shoulders. The lighting is rich and warm, casting soft shadows
across the stone walls and pillars. The setting is regal and detailed, with
banners, torchlight, and carved woodwork. Full-body shot, cinematic and
photorealistic.<lora:Viking_Princess_CFH-000007.safetensors:1.0:1.0>
parameters:
negative_prompt: >-
A confident Viking princess standing in a grand medieval throne room. She
wears a red and gold fur-trimmed V1k1nG_Pr1nC3sS outfit, featuring an
ornate corset and a floor-length cream gown with golden accents. Her
golden heels peek out from under the gown as she stands tall, one hand on
her hip. A jeweled silver tiara rests on her head, and her long wavy brown
hair flows past her shoulders. The lighting is rich and warm, casting soft
shadows across the stone walls and pillars. The setting is regal and
detailed, with banners, torchlight, and carved woodwork. Full-body shot,
cinematic and photorealistic.
output:
url: images/FLUX_0017.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: V1k1nG_Pr1nC3sS
license: unknown
---
# Viking Princess CFH
<Gallery />
## Model description
From https://civitai.com/models/1411935/viking-princess-cfh?modelVersionId=1596169
Trigger: V1k1nG_Pr1nC3sS
Strength: 0.8
👑 Viking Princess
This model embodies the untouchable power, seduction, and elegance of a fantasy realm’s most dangerous monarch. The Viking Princess is no shieldmaiden — she doesn’t fight in the mud. She commands armies in red-and-gold royalty, draped in furs and silk, corseted like a dream, and walking like the battlefield is the runway.
Load up V1k1nG_Pr1nC3sS and you’ll summon a voluptuous, commanding figure in an ornate outfit: red embroidered corset, gold cape trimmed in fur, cream gown flowing to the floor, and a gemstone pendant nestled between royal-tier cleavage. This LoRA is perfect for fantasy queens, elven nobility, magic users, or just anyone who looks like they could make a king kneel with one raised eyebrow.
🎯 Features & Capabilities:
✔ Royal fantasy aesthetic – queen, elf, goddess, ruler vibes
✔ Full-body compatible – works front, side, rear, standing, seated, all angles
✔ Outfit accuracy – corset detail, fur-lined gold cape, cream/gold gown
✔ Gemstone pendant rendering – iconic red jewel at the chest
✔ Horned tiara support – silver circlet with curved horns
✔ Elven ear compatibility – peeking through long wavy hair
✔ Works with dark & light backgrounds – crisp contrast on either
✔ Supports close-ups & portrait crops – seductive power in every frame
✔ Camera-friendly from any fantasy scene – thrones, forests, battlefields, or temples
🛠 Recommended Triggers & Tags:
Primary Trigger Word:
V1k1nG_Pr1nC3sS (Essential to activate the outfit and shape)
Helpful Enhancers:
Physical Form & Outfit:
(color) corset, fur-trimmed cape, (color) gown, long sleeves, high heels, horned tiara
Visual Detail Prompts:
jewel pendant, ornate corset, fur shoulders, embroidered fabric, fantasy queen, elven ears
Scene & Body Framing:
full body shot, standing pose, front view, side profile, confident stance, regal posture, close-up portrait, soft fantasy lighting
🎨 Recommended Color Combos:
❤️ Red & Gold – dominant corset combo, ornate and regal
🌕 Gold & Cream – cape and gown, soft royal aesthetic
💎 Red Gemstone – pendant centerpiece between breasts
👢 Gold Heels – elegant foot finish
👩🦰 Brown / Auburn Hair – natural, flowing, royal as fuck
## Trigger words
You should use `V1k1nG_Pr1nC3sS` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/VikingPrincessCFH/tree/main) them in the Files & versions tab.
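A minimal diffusers sketch for trying this LoRA with FLUX.1-dev — the weight filename matches the one referenced in the example prompts above, and the step count, guidance value, and output path are illustrative:
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("Jonjew/VikingPrincessCFH", weight_name="Viking_Princess_CFH.safetensors")
pipe.to("cuda")

image = pipe(
    "V1k1nG_Pr1nC3sS, a regal Viking princess in a red and gold fur-trimmed corset and cream gown, "
    "jeweled horned tiara, cinematic lighting, full body shot",
    num_inference_steps=30,
    guidance_scale=3.5,
    joint_attention_kwargs={"scale": 0.8},  # recommended LoRA strength from this card
).images[0]
image.save("viking_princess.png")
```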
|
Parthiban007/llama-3.1-R1 | Parthiban007 | 2025-03-31T14:45:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T14:45:19Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Parthiban007
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
goarrrrrr/NewAgents | goarrrrrr | 2025-03-31T14:44:53Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2025-03-31T14:44:52Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a new agents in Valorant style
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - goarrrrrr/NewAgents
<Gallery />
## Model description
These are goarrrrrr/NewAgents LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a new agents in Valorant style to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](goarrrrrr/NewAgents/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
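Pending the author's own snippet, here is a hedged sketch of loading these LoRA weights on top of the SDXL base model with the fp16-fix VAE mentioned above (the LoRA weight filename is assumed to be the standard one produced by the DreamBooth script and may need adjusting):
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
)
pipe.load_lora_weights("goarrrrrr/NewAgents")  # expects the standard pytorch_lora_weights.safetensors
pipe.to("cuda")

image = pipe("a new agents in Valorant style", num_inference_steps=30).images[0]
image.save("new_agent.png")
```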
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
RichardErkhov/ytu-ce-cosmos_-_Turkish-Llama-8b-v0.1-8bits | RichardErkhov | 2025-03-31T14:44:51Z | 0 | 0 | null | [
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-03-31T14:38:27Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Turkish-Llama-8b-v0.1 - bnb 8bits
- Model creator: https://huggingface.co/ytu-ce-cosmos/
- Original model: https://huggingface.co/ytu-ce-cosmos/Turkish-Llama-8b-v0.1/
Original model description:
---
license: llama3
language:
- tr
pipeline_tag: text-generation
base_model: meta-llama/Meta-Llama-3-8B
tags:
- Turkish
- turkish
- Llama
- Llama3
---
<img src="./CosmosLlaMa.png" width="400px"/>
# Cosmos LLaMa
This model is a fully fine-tuned version of the LLaMA-3 8B model with a 30GB Turkish dataset.
The Cosmos LLaMa is designed for text generation tasks, providing the ability to continue a given text snippet in a coherent and contextually relevant manner. Due to the diverse nature of the training data, which includes websites, books, and other text sources, this model can exhibit biases. Users should be aware of these biases and use the model responsibly.
## Example Usage
Here is an example of how to use the model in colab:
```python
!pip install -U accelerate bitsandbytes
```
```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
from transformers import BitsAndBytesConfig
import time
model_name = "ytu-ce-cosmos/Turkish-Llama-8b-v0.1"
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=True,  # device_map and dtype are passed to from_pretrained below
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map="auto",
torch_dtype=torch.bfloat16,
quantization_config=bnb_config,
)
```
```python
text_generator = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
device_map="auto",
temperature=0.3,
repetition_penalty=1.1,
top_p=0.9,
max_length=610,
do_sample=True,
return_full_text=False,
min_new_tokens=32
)
```
```python
text = """Yapay zeka hakkında 3 tespit yaz.\n"""
r = text_generator(text)
print(r[0]['generated_text'])
"""
1. Yapay Zeka (AI), makinelerin insan benzeri bilişsel işlevleri gerçekleştirmesini sağlayan bir teknoloji alanıdır.
2. Yapay zekanın geliştirilmesi ve uygulanması, sağlık hizmetlerinden eğlenceye kadar çeşitli sektörlerde çok sayıda fırsat sunmaktadır.
3. Yapay zeka teknolojisinin potansiyel faydaları önemli olsa da mahremiyet, işten çıkarma ve etik hususlar gibi konularla ilgili endişeler de var.
"""
```
# Acknowledgments
- Thanks to the generous support from the Hugging Face team, it is possible to download models from their S3 storage 🤗
- Computing resources used in this work were provided by the National Center for High Performance Computing of Turkey (UHeM) under grant numbers 1016912023 and
1018512024
- Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC)
### Contact
COSMOS AI Research Group, Yildiz Technical University Computer Engineering Department <br>
https://cosmos.yildiz.edu.tr/ <br>
[email protected]
# Citation
```bibtex
@inproceedings{kesgin2024optimizing,
title={Optimizing Large Language Models for Turkish: New Methodologies in Corpus Selection and Training},
author={Kesgin, H Toprak and Yuce, M Kaan and Dogan, Eren and Uzun, M Egemen and Uz, Atahan and {\.I}nce, Elif and Erdem, Yusuf and Shbib, Osama and Zeer, Ahmed and Amasyali, M Fatih},
booktitle={2024 Innovations in Intelligent Systems and Applications Conference (ASYU)},
pages={1--6},
year={2024},
organization={IEEE}
}
```
|
MichelNivard/A100Bert-v0.1 | MichelNivard | 2025-03-31T14:44:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2025-03-31T14:11:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
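Until the card is filled in, a hedged fill-mask sketch based on this repository's pipeline tag (the example sentence and mask token are assumptions):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="MichelNivard/A100Bert-v0.1")
print(fill_mask("Paris is the [MASK] of France."))
```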
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TareksLab/RolePlayer-V3-LLaMa-70B | TareksLab | 2025-03-31T14:44:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2406.11617",
"base_model:ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4",
"base_model:merge:ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4",
"base_model:LatitudeGames/Wayfarer-Large-70B-Llama-3.3",
"base_model:merge:LatitudeGames/Wayfarer-Large-70B-Llama-3.3",
"base_model:Sao10K/Llama-3.3-70B-Vulpecula-r1",
"base_model:merge:Sao10K/Llama-3.3-70B-Vulpecula-r1",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-31T13:59:00Z | ---
base_model:
- ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
- Sao10K/Llama-3.3-70B-Vulpecula-r1
- SicariusSicariiStuff/Negative_LLAMA_70B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method, with [LatitudeGames/Wayfarer-Large-70B-Llama-3.3](https://huggingface.co/LatitudeGames/Wayfarer-Large-70B-Llama-3.3) as the base.
### Models Merged
The following models were included in the merge:
* [ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4](https://huggingface.co/ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4)
* [Sao10K/Llama-3.3-70B-Vulpecula-r1](https://huggingface.co/Sao10K/Llama-3.3-70B-Vulpecula-r1)
* [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Sao10K/Llama-3.3-70B-Vulpecula-r1
parameters:
weight: 0.25
density: 0.5
epsilon: 0.05
lambda: 1.0
- model: ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
parameters:
weight: 0.25
density: 0.5
epsilon: 0.05
lambda: 1.0
- model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3
parameters:
weight: 0.25
density: 0.5
epsilon: 0.05
lambda: 1.0
- model: SicariusSicariiStuff/Negative_LLAMA_70B
parameters:
weight: 0.25
density: 0.5
epsilon: 0.05
lambda: 1.0
merge_method: della
base_model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3
parameters:
normalize: false
int8_mask: true
dtype: bfloat16
chat_template: llama3
tokenizer:
source: union
```
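
The card ends at the merge recipe; for completeness, a rough inference sketch (not part of the original card) is shown below. It assumes the merged checkpoint loads like any Llama-3.3-style causal LM; at 70B parameters the bfloat16 weights need several GPUs, hence `device_map="auto"`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TareksLab/RolePlayer-V3-LLaMa-70B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" shards the ~140 GB of bf16 weights across whatever GPUs/CPU RAM are available.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16", device_map="auto")

# The merge config sets chat_template: llama3, so the chat template can be applied directly.
messages = [{"role": "user", "content": "Introduce your character in two sentences."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```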
|
pi-de-pie/results | pi-de-pie | 2025-03-31T14:41:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/nllb-200-distilled-600M",
"base_model:finetune:facebook/nllb-200-distilled-600M",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-03-31T14:40:23Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/nllb-200-distilled-600M
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 12.3359
- Bleu: 0.2407
- Chrf: 6.2796
- Ter: 132.0388
## Model description
More information needed
## Intended uses & limitations
More information needed
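
No usage example is provided; a minimal translation sketch is given below under the assumption that the checkpoint behaves like its `facebook/nllb-200-distilled-600M` base. The FLORES-200 language codes are placeholders, since the fine-tuning languages are not documented here.

```python
from transformers import pipeline

# Placeholder language codes: substitute the pair this checkpoint was actually fine-tuned on.
translator = pipeline(
    "translation",
    model="pi-de-pie/results",
    src_lang="eng_Latn",
    tgt_lang="fra_Latn",
)
print(translator("The weather is nice today.", max_length=64)[0]["translation_text"])
```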
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Chrf | Ter |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:--------:|
| No log | 1.0 | 2 | 12.9096 | 0.2594 | 6.3668 | 127.1845 |
| No log | 2.0 | 4 | 12.6801 | 0.2472 | 6.4225 | 132.0388 |
| No log | 3.0 | 6 | 12.5083 | 0.2290 | 6.3274 | 137.3786 |
| No log | 4.0 | 8 | 12.3900 | 0.2325 | 6.2686 | 135.4369 |
| 12.7423 | 5.0 | 10 | 12.3359 | 0.2407 | 6.2796 | 132.0388 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
sshkeda/beans-0-1.5B | sshkeda | 2025-03-31T14:40:38Z | 170 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-14T18:50:26Z | ---
library_name: transformers
license: other
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: beans-0-1.5B
results: []
---
# Model Card for beans-0-1.5B
An LLM trained to reason about legal chess moves.
### Model Description
- **Developed by:** Stephen Shkeda
- **License:** MIT
- **Finetuned from model:** [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B)
### Model Sources
- **Repository:** https://github.com/sshkeda/beans
- **Training data:** https://huggingface.co/datasets/sshkeda/beans-0-dataset.json
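
The card does not include an inference snippet; the sketch below assumes the standard `transformers` text-generation pipeline and the chat format inherited from the DeepSeek-R1-Distill base. The FEN prompt is purely illustrative.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="sshkeda/beans-0-1.5B", device_map="auto")
prompt = [{
    "role": "user",
    "content": "Position (FEN): rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1. Is e2e4 legal?"
}]
output = generator(prompt, max_new_tokens=256, return_full_text=False)[0]
print(output["generated_text"])
```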
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 3.0
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0 |
jnian/Qwen2.5-7B-Instruct-Open-R1-GRPO-easy_query-100k | jnian | 2025-03-31T14:38:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:SCU-IR/easy_query_hard_doc_msmarco_level2_GRPO",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-03-29T23:29:51Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
datasets: SCU-IR/easy_query_hard_doc_msmarco_level2_GRPO
library_name: transformers
model_name: Qwen2.5-7B-Instruct-Open-R1-GRPO-easy_query-100k
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-7B-Instruct-Open-R1-GRPO-easy_query-100k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the [SCU-IR/easy_query_hard_doc_msmarco_level2_GRPO](https://huggingface.co/datasets/SCU-IR/easy_query_hard_doc_msmarco_level2_GRPO) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jnian/Qwen2.5-7B-Instruct-Open-R1-GRPO-easy_query-100k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zpeng/ReasonRank/runs/0pfpmwen)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
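
For readers new to the method, a minimal GRPO setup with TRL is sketched below. This is an illustration of the `GRPOTrainer` API only — the dataset, reward function, and hyperparameters are stand-ins, not the recipe used for this model.

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Toy reward: GRPO scores each sampled completion within a group; here shorter completions score higher.
def reward_brevity(completions, **kwargs):
    return [-float(len(completion)) for completion in completions]

dataset = load_dataset("trl-lib/tldr", split="train")

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-7B-Instruct",
    reward_funcs=reward_brevity,
    args=GRPOConfig(output_dir="grpo-sketch", per_device_train_batch_size=2),
    train_dataset=dataset,
)
trainer.train()
```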
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/LN-Korean-14B-v0.2-GGUF | mradermacher | 2025-03-31T14:36:17Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"ko",
"zh",
"base_model:SakuraLLM/LN-Korean-14B-v0.2",
"base_model:quantized:SakuraLLM/LN-Korean-14B-v0.2",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-03-31T11:14:22Z | ---
base_model: SakuraLLM/LN-Korean-14B-v0.2
language:
- ko
- zh
library_name: transformers
license: cc-by-nc-sa-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/SakuraLLM/LN-Korean-14B-v0.2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
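
For a quick local test, one option is the `llama-cpp-python` bindings; the sketch below is not from the original card, and the file name, context size, and prompt are placeholders (any quant from the table below should work the same way).

```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from the table below has already been downloaded next to this script.
llm = Llama(model_path="LN-Korean-14B-v0.2.Q4_K_M.gguf", n_ctx=4096)
out = llm("다음 문장을 한국어로 번역해 주세요: 今日はいい天気ですね。", max_tokens=128)
print(out["choices"][0]["text"])
```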
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.2-GGUF/resolve/main/LN-Korean-14B-v0.2.Q2_K.gguf) | Q2_K | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.2-GGUF/resolve/main/LN-Korean-14B-v0.2.Q3_K_S.gguf) | Q3_K_S | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.2-GGUF/resolve/main/LN-Korean-14B-v0.2.Q3_K_M.gguf) | Q3_K_M | 7.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.2-GGUF/resolve/main/LN-Korean-14B-v0.2.Q3_K_L.gguf) | Q3_K_L | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.2-GGUF/resolve/main/LN-Korean-14B-v0.2.IQ4_XS.gguf) | IQ4_XS | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.2-GGUF/resolve/main/LN-Korean-14B-v0.2.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.2-GGUF/resolve/main/LN-Korean-14B-v0.2.Q4_K_M.gguf) | Q4_K_M | 9.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.2-GGUF/resolve/main/LN-Korean-14B-v0.2.Q5_K_S.gguf) | Q5_K_S | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.2-GGUF/resolve/main/LN-Korean-14B-v0.2.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.2-GGUF/resolve/main/LN-Korean-14B-v0.2.Q6_K.gguf) | Q6_K | 12.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LN-Korean-14B-v0.2-GGUF/resolve/main/LN-Korean-14B-v0.2.Q8_0.gguf) | Q8_0 | 15.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
CYHcyh66/AI_assistant-3 | CYHcyh66 | 2025-03-31T14:32:39Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T14:29:36Z | ---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** CYHcyh66
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
abdwahdia/mistral_7b_islam_qa_dataset | abdwahdia | 2025-03-31T14:32:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T14:30:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
iTroned/bert_32_hate_test | iTroned | 2025-03-31T14:31:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T14:13:25Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_32_hate_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/itroned-ntnu/huggingface/runs/wfyyg33h)
# bert_32_hate_test
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2067
- Accuracy Offensive: 0.9441
- F1 Offensive: 0.9425
- Accuracy Targeted: 0.9441
- F1 Targeted: 0.9173
- Accuracy Stance: 0.9079
- F1 Stance: 0.8717
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Offensive | F1 Offensive | Accuracy Targeted | F1 Targeted | Accuracy Stance | F1 Stance |
|:-------------:|:-----:|:-----:|:---------------:|:------------------:|:------------:|:-----------------:|:-----------:|:---------------:|:---------:|
| 0.5996 | 1.0 | 1490 | 0.3087 | 0.9434 | 0.9417 | 0.9441 | 0.9173 | 0.9079 | 0.8717 |
| 0.2753 | 2.0 | 2980 | 0.2483 | 0.9441 | 0.9425 | 0.9441 | 0.9173 | 0.9079 | 0.8717 |
| 0.2273 | 3.0 | 4470 | 0.2234 | 0.9441 | 0.9425 | 0.9441 | 0.9173 | 0.9079 | 0.8717 |
| 0.2078 | 4.0 | 5960 | 0.2190 | 0.9441 | 0.9425 | 0.9441 | 0.9173 | 0.9079 | 0.8717 |
| 0.2054 | 5.0 | 7450 | 0.2173 | 0.9441 | 0.9425 | 0.9441 | 0.9173 | 0.9079 | 0.8717 |
| 0.1939 | 6.0 | 8940 | 0.2089 | 0.9441 | 0.9425 | 0.9441 | 0.9173 | 0.9079 | 0.8717 |
| 0.1945 | 7.0 | 10430 | 0.2070 | 0.9441 | 0.9425 | 0.9441 | 0.9173 | 0.9079 | 0.8717 |
| 0.1846 | 8.0 | 11920 | 0.2069 | 0.9441 | 0.9425 | 0.9441 | 0.9173 | 0.9079 | 0.8717 |
| 0.1827 | 9.0 | 13410 | 0.2067 | 0.9441 | 0.9425 | 0.9441 | 0.9173 | 0.9079 | 0.8717 |
| 0.1763 | 10.0 | 14900 | 0.2068 | 0.9441 | 0.9425 | 0.9441 | 0.9173 | 0.9079 | 0.8717 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.0.1
- Tokenizers 0.21.1
|
ashutosh-vp/mistral-lora-finetuned-18k-split | ashutosh-vp | 2025-03-31T14:27:04Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-03-31T14:26:02Z | # mistral-lora-finetuned-18k-split
Fine-tuned Mistral LoRA model uploaded by ashutosh-vp. |
Ruoqizeng/1ruoqi | Ruoqizeng | 2025-03-31T14:26:10Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-03-31T14:26:08Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Ruoqi
---
# 1Ruoqi
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Ruoqi` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Ruoqizeng/1ruoqi', weight_name='lora.safetensors')
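# Prompts should contain the trigger word "Ruoqi" (see "Trigger words" above), e.g. 'a portrait photo of Ruoqi'.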
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Rupesh2/llama-3.2-3B-NLI | Rupesh2 | 2025-03-31T14:25:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-03-31T14:25:36Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Rupesh2
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|