| modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-08-02 00:43:11) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 548 classes) | tags (list, length 1–4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-08-02 00:35:11) | card (string, length 11–1.01M) |
|---|---|---|---|---|---|---|---|---|---|
kpushpender/whisper-lora-mani | kpushpender | 2025-06-10T18:29:35Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-04-26T11:26:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
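In the absence of an official example, the following is a minimal sketch, assuming the repository contains a Transformers-compatible Whisper-style speech-recognition checkpoint (the repository name suggests a Whisper LoRA fine-tune); the audio path is a placeholder.

```python
# Minimal sketch (assumption: the checkpoint loads directly as a Whisper ASR model).
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="kpushpender/whisper-lora-mani")

# "sample.wav" is a placeholder path to a local audio file.
print(asr("sample.wav")["text"])
```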
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Flo0620/Qwen2_5_7B_r32_a64_d0_1_ArXivQA | Flo0620 | 2025-06-10T18:27:22Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct", "endpoints_compatible", "region:us"] | null | 2025-06-09T12:08:55Z |
---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: Qwen2_5_7B_r32_a64_d0_1_ArXivQA
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2_5_7B_r32_a64_d0_1_ArXivQA
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Flo0620/Qwen2_5_7B_r32_a64_d0_1_ArXivQA", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
BichuGeo/falcon-lora-imdb-causual-inference | BichuGeo | 2025-06-10T18:26:56Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-06-10T18:26:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
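Until an official example is added, here is a minimal sketch that assumes the repository holds a full causal language model loadable with `AutoModelForCausalLM` (not a standalone LoRA adapter); the prompt is illustrative.

```python
# Minimal sketch (assumption: full causal LM checkpoint, loadable directly).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BichuGeo/falcon-lora-imdb-causual-inference"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative IMDB-style prompt.
inputs = tokenizer("The movie was surprisingly", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```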
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Satyam-Singh/lila_2 | Satyam-Singh | 2025-06-10T18:25:09Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/phi-4-unsloth-bnb-4bit", "base_model:finetune:unsloth/phi-4-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-06-10T18:24:56Z |
---
base_model: unsloth/phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Satyam-Singh
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-4-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aymanbakiri/MNLP_M3_mcqa_dpo_model_full | aymanbakiri | 2025-06-10T18:20:32Z | 0 | 0 | null | ["safetensors", "qwen3", "text-generation", "question-answering", "mcqa", "merged", "sft", "lora", "en", "base_model:AnnaelleMyriam/MNLP_M3_sft_dpo_1024_beta0.5_2e-5_FINAL_v3_16_check1500", "base_model:adapter:AnnaelleMyriam/MNLP_M3_sft_dpo_1024_beta0.5_2e-5_FINAL_v3_16_check1500", "license:apache-2.0", "region:us"] | question-answering | 2025-06-10T18:20:00Z |
---
language: en
license: apache-2.0
tags:
- text-generation
- question-answering
- mcqa
- merged
- sft
- lora
base_model: AnnaelleMyriam/MNLP_M3_sft_dpo_1024_beta0.5_2e-5_FINAL_v3_16_check1500
---
# MNLP M3 MCQA Merged Model
This model is a merged version of:
- **Base SFT Model**: `AnnaelleMyriam/MNLP_M3_sft_dpo_1024_beta0.5_2e-5_FINAL_v3_16_check1500`
- **LoRA Adapter**: `aymanbakiri/MNLP_M3_mcqa_dpo_model`
## Model Description
This is a specialized model for Multiple Choice Question Answering (MCQA) tasks, created by:
1. Starting with the SFT model `AnnaelleMyriam/MNLP_M3_sft_dpo_1024_beta0.5_2e-5_FINAL_v3_16_check1500`
2. Fine-tuning with LoRA adapters on MCQA data
3. Merging the LoRA weights back into the base model (a sketch of this merge step is shown below)
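A rough sketch of the merge step, assuming the adapter repository is a standard PEFT LoRA checkpoint (an illustration, not the authors' exact script):

```python
# Hedged sketch of step 3: attach the LoRA adapter to the SFT base model and merge.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "AnnaelleMyriam/MNLP_M3_sft_dpo_1024_beta0.5_2e-5_FINAL_v3_16_check1500"
)
model = PeftModel.from_pretrained(base, "aymanbakiri/MNLP_M3_mcqa_dpo_model")

merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights
merged.save_pretrained("MNLP_M3_mcqa_dpo_model_full")
```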
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("aymanbakiri/MNLP_M3_mcqa_dpo_model_full")
tokenizer = AutoTokenizer.from_pretrained("aymanbakiri/MNLP_M3_mcqa_dpo_model_full")
# Example usage for MCQA
prompt = """Question: What is the capital of France?
Options: (A) London (B) Berlin (C) Paris (D) Madrid
Answer:"""
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(answer)
```
## Training Details
- Base Model: SFT model fine-tuned for instruction following
- LoRA Configuration: r=16, alpha=32, dropout=0.1
- Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj, lm_head
- Training Data: MNLP M2 MCQA Dataset
## Performance
This merged model should provide better performance than the original LoRA adapter while being easier to deploy and use.
|
mirodavide/vlm-vqa | mirodavide | 2025-06-10T18:17:31Z | 10 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-03-17T22:43:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aymanbakiri/MNLP_M3_mcqa_dpo_model | aymanbakiri | 2025-06-10T18:16:29Z | 0 | 0 | peft | ["peft", "safetensors", "mcqa", "question-answering", "dpo", "lora", "qwen", "unsloth", "generated_from_trainer", "base_model:AnnaelleMyriam/MNLP_M3_sft_dpo_1024_beta0.5_2e-5_FINAL_v3_16_check1500", "base_model:adapter:AnnaelleMyriam/MNLP_M3_sft_dpo_1024_beta0.5_2e-5_FINAL_v3_16_check1500", "region:us"] | question-answering | 2025-06-10T16:34:25Z |
---
library_name: peft
base_model: AnnaelleMyriam/MNLP_M3_sft_dpo_1024_beta0.5_2e-5_FINAL_v3_16_check1500
tags:
- mcqa
- question-answering
- dpo
- lora
- qwen
- unsloth
- generated_from_trainer
model-index:
- name: MNLP_M3_mcqa_dpo_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MNLP_M3_mcqa_dpo_model
This model is a fine-tuned version of [AnnaelleMyriam/MNLP_M3_sft_dpo_1024_beta0.5_2e-5_FINAL_v3_16_check1500](https://huggingface.co/AnnaelleMyriam/MNLP_M3_sft_dpo_1024_beta0.5_2e-5_FINAL_v3_16_check1500) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3494
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch reproducing them is shown after the list):
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
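For orientation, the hyperparameters above map onto a `transformers` `TrainingArguments` configuration roughly as follows; this is a sketch, not the authors' training script.

```python
# Hedged sketch: the listed hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="MNLP_M3_mcqa_dpo_model",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,   # effective train batch size 2 * 4 = 8
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    seed=42,
    optim="adamw_torch",             # AdamW with betas=(0.9, 0.999), eps=1e-8
)
```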
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.541 | 0.0811 | 150 | 0.4871 |
| 0.3978 | 0.1622 | 300 | 0.4650 |
| 0.4109 | 0.2433 | 450 | 0.4297 |
| 0.4848 | 0.3244 | 600 | 0.4074 |
| 0.4588 | 0.4055 | 750 | 0.3867 |
| 0.4039 | 0.4866 | 900 | 0.3828 |
| 0.3221 | 0.5677 | 1050 | 0.4007 |
| 0.3642 | 0.6488 | 1200 | 0.3854 |
| 0.3558 | 0.7299 | 1350 | 0.4022 |
| 0.3155 | 0.8110 | 1500 | 0.3775 |
| 0.4315 | 0.8921 | 1650 | 0.3692 |
| 0.3845 | 0.9732 | 1800 | 0.3586 |
| 0.4821 | 1.0541 | 1950 | 0.3639 |
| 0.3883 | 1.1352 | 2100 | 0.3683 |
| 0.3996 | 1.2163 | 2250 | 0.3670 |
| 0.4104 | 1.2974 | 2400 | 0.3365 |
| 0.4321 | 1.3785 | 2550 | 0.3496 |
| 0.3271 | 1.4596 | 2700 | 0.3394 |
| 0.3327 | 1.5407 | 2850 | 0.3544 |
| 0.2663 | 1.6218 | 3000 | 0.3632 |
| 0.5097 | 1.7029 | 3150 | 0.3435 |
| 0.4855 | 1.7840 | 3300 | 0.3344 |
| 0.1663 | 1.8651 | 3450 | 0.3521 |
| 0.3408 | 1.9462 | 3600 | 0.3551 |
| 0.2752 | 2.0270 | 3750 | 0.3448 |
| 0.4994 | 2.1081 | 3900 | 0.3552 |
| 0.4012 | 2.1892 | 4050 | 0.3537 |
| 0.1766 | 2.2703 | 4200 | 0.3596 |
| 0.3081 | 2.3514 | 4350 | 0.3584 |
| 0.2448 | 2.4325 | 4500 | 0.3595 |
| 0.3791 | 2.5137 | 4650 | 0.3547 |
| 0.3062 | 2.5948 | 4800 | 0.3501 |
| 0.2908 | 2.6759 | 4950 | 0.3472 |
| 0.3918 | 2.7570 | 5100 | 0.3470 |
| 0.3629 | 2.8381 | 5250 | 0.3479 |
| 0.2431 | 2.9192 | 5400 | 0.3487 |
| 0.1877 | 3.0 | 5550 | 0.3494 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
mdeputy/windowz_ln_segment_051525 | mdeputy | 2025-06-10T18:14:45Z | 8 | 0 | transformers | ["transformers", "safetensors", "Unet", "generated_from_trainer", "endpoints_compatible", "region:us"] | null | 2025-05-16T23:07:55Z |
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: windowz_ln_segment_051525
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# windowz_ln_segment_051525
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.9734
- F1: 0.9770
- Iou: 0.9606
- Per Class Metrics: {0: {'f1': 0.99208, 'iou': 0.98429, 'accuracy': 0.98821}, 1: {'f1': 0.95742, 'iou': 0.91831, 'accuracy': 0.97945}, 2: {'f1': 0.27975, 'iou': 0.16263, 'accuracy': 0.97907}}
- Loss: 0.5105
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Iou | Per Class Metrics | Validation Loss |
|:-------------:|:------:|:----:|:------:|:-----------------:|:---------------:|
| No log | 0.0503 | 86 | 0.1155 | {0: {'f1': 1e-05, 'iou': 0.0, 'accuracy': 0.25189}, 1: {'f1': 0.64518, 'iou': 0.47621, 'accuracy': 0.86451}, 2: {'f1': 0.00676, 'iou': 0.00339, 'accuracy': 0.13587}} | 1.0017 |
| 1.2378 | 0.1006 | 172 | 0.0635 | {0: {'f1': 2e-05, 'iou': 1e-05, 'accuracy': 0.25181}, 1: {'f1': 0.41464, 'iou': 0.26154, 'accuracy': 0.81337}, 2: {'f1': 0.01135, 'iou': 0.00571, 'accuracy': 0.07762}} | 0.9440 |
| 1.2207 | 0.1510 | 258 | 0.0554 | {0: {'f1': 0.0, 'iou': 0.0, 'accuracy': 0.25189}, 1: {'f1': 0.37135, 'iou': 0.22801, 'accuracy': 0.80693}, 2: {'f1': 0.01383, 'iou': 0.00696, 'accuracy': 0.0683}} | 0.9237 |
| 1.1925 | 0.2013 | 344 | 0.0499 | {0: {'f1': 0.0, 'iou': 0.0, 'accuracy': 0.2519}, 1: {'f1': 0.34062, 'iou': 0.20527, 'accuracy': 0.80395}, 2: {'f1': 0.0169, 'iou': 0.00852, 'accuracy': 0.06157}} | 0.9326 |
| 1.1606 | 0.2516 | 430 | 0.0647 | {0: {'f1': 0.0, 'iou': 0.0, 'accuracy': 0.2519}, 1: {'f1': 0.42092, 'iou': 0.26656, 'accuracy': 0.81803}, 2: {'f1': 0.01824, 'iou': 0.0092, 'accuracy': 0.07944}} | 0.9245 |
| 1.1299 | 0.3019 | 516 | 0.1179 | {0: {'f1': 0.00055, 'iou': 0.00027, 'accuracy': 0.2521}, 1: {'f1': 0.65335, 'iou': 0.48517, 'accuracy': 0.87296}, 2: {'f1': 0.01684, 'iou': 0.00849, 'accuracy': 0.12971}} | 0.8751 |
| 1.0998 | 0.3523 | 602 | 0.1795 | {0: {'f1': 0.09312, 'iou': 0.04883, 'accuracy': 0.28836}, 1: {'f1': 0.74168, 'iou': 0.58942, 'accuracy': 0.89807}, 2: {'f1': 0.01625, 'iou': 0.00819, 'accuracy': 0.19265}} | 0.8243 |
| 1.0998 | 0.4026 | 688 | 0.8495 | {0: {'f1': 0.98464, 'iou': 0.96975, 'accuracy': 0.97734}, 1: {'f1': 0.67506, 'iou': 0.5095, 'accuracy': 0.879}, 2: {'f1': 0.10072, 'iou': 0.05303, 'accuracy': 0.86349}} | 0.8043 |
| 1.0696 | 0.4529 | 774 | 0.874 | {0: {'f1': 0.98422, 'iou': 0.96893, 'accuracy': 0.97671}, 1: {'f1': 0.75979, 'iou': 0.61262, 'accuracy': 0.90403}, 2: {'f1': 0.11786, 'iou': 0.06262, 'accuracy': 0.88981}} | 0.7506 |
| 1.0419 | 0.5032 | 860 | 0.9391 | {0: {'f1': 0.98895, 'iou': 0.97813, 'accuracy': 0.98362}, 1: {'f1': 0.9206, 'iou': 0.85288, 'accuracy': 0.96291}, 2: {'f1': 0.10984, 'iou': 0.05811, 'accuracy': 0.9543}} | 0.6618 |
| 1.0149 | 0.5535 | 946 | 0.7046 | {0: {'f1': 0.91333, 'iou': 0.84049, 'accuracy': 0.85914}, 1: {'f1': 0.47479, 'iou': 0.3113, 'accuracy': 0.83125}, 2: {'f1': 0.06969, 'iou': 0.0361, 'accuracy': 0.95034}} | 0.7884 |
| 1.0024 | 0.6039 | 1032 | 0.8763 | {0: {'f1': 0.97412, 'iou': 0.94954, 'accuracy': 0.96061}, 1: {'f1': 0.81027, 'iou': 0.68105, 'accuracy': 0.92142}, 2: {'f1': 0.15071, 'iou': 0.0815, 'accuracy': 0.94565}} | 0.5940 |
| 0.9739 | 0.6542 | 1118 | 0.2909 | {0: {'f1': 0.24044, 'iou': 0.13665, 'accuracy': 0.34611}, 1: {'f1': 0.87511, 'iou': 0.77796, 'accuracy': 0.94508}, 2: {'f1': 0.01577, 'iou': 0.00795, 'accuracy': 0.31169}} | 0.9626 |
| 0.9626 | 0.7045 | 1204 | 0.9520 | {0: {'f1': 0.99067, 'iou': 0.98152, 'accuracy': 0.98615}, 1: {'f1': 0.94406, 'iou': 0.89405, 'accuracy': 0.97333}, 2: {'f1': 0.17721, 'iou': 0.09722, 'accuracy': 0.96864}} | 0.5658 |
| 0.9626 | 0.7548 | 1290 | 0.9183 | {0: {'f1': 0.98633, 'iou': 0.97303, 'accuracy': 0.97941}, 1: {'f1': 0.87697, 'iou': 0.78089, 'accuracy': 0.94587}, 2: {'f1': 0.18891, 'iou': 0.1043, 'accuracy': 0.95606}} | 0.6192 |
| 0.9515 | 0.8051 | 1376 | 0.3322 | {0: {'f1': 0.25117, 'iou': 0.14362, 'accuracy': 0.35705}, 1: {'f1': 0.96192, 'iou': 0.92663, 'accuracy': 0.98155}, 2: {'f1': 0.00532, 'iou': 0.00267, 'accuracy': 0.34648}} | 0.9133 |
| 0.9395 | 0.8555 | 1462 | 0.9488 | {0: {'f1': 0.99219, 'iou': 0.98451, 'accuracy': 0.98835}, 1: {'f1': 0.93039, 'iou': 0.86984, 'accuracy': 0.96766}, 2: {'f1': 0.25682, 'iou': 0.14733, 'accuracy': 0.96767}} | 0.5446 |
| 0.9288 | 0.9058 | 1548 | 0.9606 | {0: {'f1': 0.99208, 'iou': 0.98429, 'accuracy': 0.98821}, 1: {'f1': 0.95742, 'iou': 0.91831, 'accuracy': 0.97945}, 2: {'f1': 0.27975, 'iou': 0.16263, 'accuracy': 0.97907}} | 0.5105 |
| 0.9341 | 0.9561 | 1634 | 0.9536 | {0: {'f1': 0.98841, 'iou': 0.97708, 'accuracy': 0.98281}, 1: {'f1': 0.95511, 'iou': 0.91408, 'accuracy': 0.97815}, 2: {'f1': 0.18082, 'iou': 0.0994, 'accuracy': 0.97498}} | 0.5982 |
### Framework versions
- Transformers 4.45.0
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.20.3
|
reinattwijaya/Qwen3-0.6B-final-gptq-smoothquant-w8a8-ultrachat | reinattwijaya | 2025-06-10T18:14:15Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "compressed-tensors", "region:us"] | text-generation | 2025-06-10T18:13:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
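No official snippet is provided. The repository tags indicate a W8A8 checkpoint stored in the compressed-tensors format, which is most commonly served with vLLM; the following is a minimal sketch under that assumption (prompt and sampling settings are illustrative).

```python
# Hedged sketch (assumption: the compressed-tensors W8A8 checkpoint is served with vLLM).
from vllm import LLM, SamplingParams

llm = LLM(model="reinattwijaya/Qwen3-0.6B-final-gptq-smoothquant-w8a8-ultrachat")
sampling = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Explain in one paragraph what SmoothQuant does."], sampling)
print(outputs[0].outputs[0].text)
```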
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
clejordan/MNLP_M3_quantized_model | clejordan | 2025-06-10T18:14:07Z | 194 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us"] | text-generation | 2025-05-30T16:41:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
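Since this section is a placeholder: the tags indicate an 8-bit bitsandbytes checkpoint, so a minimal loading sketch might look like the following (assumes a CUDA GPU and the `bitsandbytes` package; the prompt is illustrative).

```python
# Hedged sketch (assumption: an 8-bit bitsandbytes checkpoint that from_pretrained
# can restore directly onto a GPU).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "clejordan/MNLP_M3_quantized_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Question: What is 2 + 2?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```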
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RioJune/AG-KD | RioJune | 2025-06-10T18:13:38Z | 48 | 0 | transformers | ["transformers", "safetensors", "florence2", "text-generation", "vision-language", "abnormality-grounding", "medical-imaging", "knowledge-distillation", "multimodal", "image-text-to-text", "custom_code", "arxiv:2503.03278", "base_model:microsoft/Florence-2-base-ft", "base_model:finetune:microsoft/Florence-2-base-ft", "license:apache-2.0", "model-index", "autotrain_compatible", "region:us"] | image-text-to-text | 2025-06-05T17:08:38Z |
---
library_name: transformers
pipeline_tag: image-text-to-text
base_model:
- microsoft/Florence-2-base-ft
license: apache-2.0
tags:
- vision-language
- abnormality-grounding
- medical-imaging
- knowledge-distillation
- multimodal
model-index:
- name: AG-KD
results:
- task:
type: Abnormality Grounding
name: Grounding
metrics:
- name: none
type: none
value: null
---
# 🚀 Enhancing Abnormality Grounding for Vision-Language Models with Knowledge Descriptions
This repository provides the code and model weights for our paper:
**[Enhancing Abnormality Grounding for Vision-Language Models with Knowledge Descriptions](https://arxiv.org/abs/2503.03278)**
🧪 Explore our live demo on [Hugging Face Spaces](https://huggingface.co/spaces/Anonymous-AC/AG-KD-anonymous-Demo) to see the model in action!
## 📌 Overview
**AG-KD (Abnormality Grounding with Knowledge Descriptions)** is a compact 0.23B vision-language model designed for abnormality grounding in medical images. Despite its small size, it delivers performance **comparable to 7B state-of-the-art medical VLMs**. Our approach integrates **structured knowledge descriptions** into prompts, enhancing the model’s ability to localize medical abnormalities in images.
## 💻 How to Use
### Simple Example
For detailed examples, visit: [AG-KD GitHub Repository](https://github.com/LijunRio/AG-KD)
```python
import torch
import requests
from io import BytesIO
from PIL import Image
import numpy as np
import albumentations as A
from transformers import AutoModelForCausalLM, AutoProcessor
def apply_transform(image, size=512):
transform = A.Compose([
A.LongestMaxSize(max_size=size),
A.PadIfNeeded(min_height=size, min_width=size, border_mode=0, value=(0,0,0)),
A.Resize(height=size, width=size)
])
return transform(image=np.array(image))["image"]
def run_simple(image_url, target, definition, model, processor, device):
prompt = f"<CAPTION_TO_PHRASE_GROUNDING>Locate the phrases in the caption: {target} means {definition}."
response = requests.get(image_url)
image = Image.open(BytesIO(response.content)).convert("RGB")
np_image = apply_transform(image)
inputs = processor(text=[prompt], images=[np_image], return_tensors="pt", padding=True).to(device)
outputs = model.generate(
input_ids=inputs["input_ids"],
pixel_values=inputs["pixel_values"],
max_new_tokens=1024,
num_beams=3,
output_scores=True,
return_dict_in_generate=True
)
transition_scores = model.compute_transition_scores(outputs.sequences, outputs.scores, outputs.beam_indices, normalize_logits=False)
generated_text = processor.batch_decode(outputs.sequences, skip_special_tokens=False)[0]
output_len = np.sum(transition_scores.cpu().numpy() < 0, axis=1)
length_penalty = model.generation_config.length_penalty
score = transition_scores.cpu().sum(axis=1) / (output_len**length_penalty)
prob = np.exp(score.cpu().numpy())
print(f"\n[IMAGE URL] {image_url}")
print(f"[TARGET] {target}")
print(f"[PROBABILITY] {prob[0] * 100:.2f}%")
print(f"[GENERATED TEXT]\n{generated_text}")
if __name__ == "__main__":
image_url = "https://huggingface.co/spaces/RioJune/AG-KD/resolve/main/examples/f1eb2216d773ced6330b1f31e18f04f8.png"
target = "pulmonary fibrosis"
definition = "Scarring of the lung tissue creating a dense fibrous appearance."
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name = "RioJune/AG-KD"
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True).to(device)
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
run_simple(image_url, target, definition, model, processor, device)
```
## 📖 Citation
If you use our work, please cite:
```
@article{li2025enhancing,
title={Enhancing Abnormality Grounding for Vision Language Models with Knowledge Descriptions},
author={Li, J. and Liu, C. and Bai, W. and Arcucci, R. and Bercea, C. I. and Schnabel, J. A.},
journal={arXiv preprint arXiv:2503.03278},
year={2025}
}
```
|
MinaMila/llama_instbase_unlearned_ug2_e-6_1.0_0.5_0.25_0.25_ep2_LoRa_ACSEmployment_2_cfda_ep1_22 | MinaMila | 2025-06-10T18:10:24Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-06-10T18:10:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Mohibrehman31/custom-head-men-gemma-2-2b-p2 | Mohibrehman31 | 2025-06-10T18:08:47Z | 0 | 0 | peft | ["peft", "safetensors", "gemma2", "arxiv:1910.09700", "base_model:google/gemma-2-2b", "base_model:adapter:google/gemma-2-2b", "4-bit", "bitsandbytes", "region:us"] | null | 2025-06-10T18:08:40Z |
---
base_model: google/gemma-2-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
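Since this section is a placeholder, a minimal sketch of loading the adapter with PEFT might look like the following, assuming this repository is a standard PEFT adapter trained on top of google/gemma-2-2b (as the base_model metadata indicates); if the repository also defines a custom head, additional code would be required, and the prompt below is illustrative.

```python
# Hedged sketch (assumption: a PEFT adapter for the google/gemma-2-2b base model).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b")
model = PeftModel.from_pretrained(base, "Mohibrehman31/custom-head-men-gemma-2-2b-p2")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b")

inputs = tokenizer("The three primary colors are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```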
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
Torkildef/SAM2_fintuned_soccerfield | Torkildef | 2025-06-10T18:08:21Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-06-10T18:02:39Z |
---
license: apache-2.0
---
|
Thimphou/MNLP_M3_mcqa_model | Thimphou | 2025-06-10T18:05:56Z | 0 | 0 | null | ["safetensors", "qwen3", "fine-tuned", "multiple-choice-qa", "mcqa", "question-answering", "text-generation", "conversational", "en", "dataset:custom-mcqa-dataset", "base_model:Thimphou/MNLP_M3_SFT_code_5percent", "base_model:finetune:Thimphou/MNLP_M3_SFT_code_5percent", "license:apache-2.0", "region:us"] | text-generation | 2025-06-10T18:03:58Z |
---
license: apache-2.0
base_model: Thimphou/MNLP_M3_SFT_code_5percent
tags:
- fine-tuned
- multiple-choice-qa
- mcqa
- question-answering
datasets:
- custom-mcqa-dataset
language:
- en
pipeline_tag: text-generation
---
# MNLP_M3_mcqa_model
This model is a fine-tuned version of [Thimphou/MNLP_M3_SFT_code_5percent](https://huggingface.co/Thimphou/MNLP_M3_SFT_code_5percent) for Multiple Choice Question Answering (MCQA) tasks.
## Model Details
- **Base Model**: Thimphou/MNLP_M3_SFT_code_5percent
- **Task**: Multiple Choice Question Answering
- **Model Type**: Classic
- **Training Context**: With context
- **Evaluation Context**: Without context
- **Fine-tuning Method**: Causal Language Modeling
## Training Details
- **Epochs**: 3
- **Learning Rate**: 5e-05
- **Batch Size**: 2
- **Training Framework**: Transformers + PyTorch
## Performance
| Metric | Baseline | Fine-tuned | Improvement |
|--------|----------|------------|-------------|
| Accuracy | 48.00% | 54.00% | +6.00% |
## Training Data
The model was fine-tuned on a custom MCQA dataset with the following characteristics:
- Format: Multiple choice questions with 4 options (A, B, C, D)
- Context: Included during training
- Evaluation: Without context
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Thimphou/MNLP_M3_mcqa_model", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Thimphou/MNLP_M3_mcqa_model", trust_remote_code=True)
# For MCQA tasks, provide the question and options, then generate the answer
prompt = "Question: What is the capital of France?\nA) London\nB) Berlin\nC) Paris\nD) Madrid\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
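As a small follow-up to the example above (an assumption about the output format, not part of the original card), the predicted choice can be read off the generated continuation:
```python
import re

# Continues the example above; assumes the decoded output starts with the prompt verbatim
# and that the first A/B/C/D in the newly generated text is the predicted choice letter.
generated_part = answer[len(prompt):]
match = re.search(r"[ABCD]", generated_part)
predicted_choice = match.group(0) if match else None
print(predicted_choice)  # e.g. "C"
```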
|
sayantan0013/rubi_DPO_ramp_phase_2
|
sayantan0013
| 2025-06-10T18:05:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:sayantan0013/rubi_DPO_phase_1",
"base_model:finetune:sayantan0013/rubi_DPO_phase_1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-10T15:56:53Z |
---
base_model: sayantan0013/rubi_DPO_phase_1
library_name: transformers
model_name: rubi_DPO_ramp_phase_2
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for rubi_DPO_ramp_phase_2
This model is a fine-tuned version of [sayantan0013/rubi_DPO_phase_1](https://huggingface.co/sayantan0013/rubi_DPO_phase_1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sayantan0013/rubi_DPO_ramp_phase_2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/sayantan0013-epfl/huggingface/runs/4jay6rou)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
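As a rough illustration of the method (not the authors' actual training script; the preference dataset name and the beta value below are placeholders), DPO fine-tuning with TRL typically looks like this:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "sayantan0013/rubi_DPO_phase_1"  # the stated starting checkpoint
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Hypothetical preference dataset with "prompt", "chosen", "rejected" columns.
dataset = load_dataset("my_org/my_preference_dataset", split="train")

config = DPOConfig(output_dir="rubi_DPO_ramp_phase_2", beta=0.1)  # beta is an assumed value
trainer = DPOTrainer(model=model, args=config, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```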
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
clejordan/MNLP_M3_bnbquantized_test
|
clejordan
| 2025-06-10T18:05:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-10T18:04:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
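Given the repository tags on this entry (qwen3, text-generation, 8-bit, bitsandbytes), the following is a hedged loading sketch; the quantization settings are assumed to be stored in the checkpoint and the generation parameters are illustrative only.
```python
# Minimal sketch, assuming the checkpoint was saved with a bitsandbytes 8-bit quantization
# config (as the tags suggest); requires the bitsandbytes and accelerate packages.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "clejordan/MNLP_M3_bnbquantized_test"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # quantization config is read from the checkpoint

inputs = tokenizer("What is the capital of France?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```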
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
eylulipci/MNLP_M3_dpo_model
|
eylulipci
| 2025-06-10T18:03:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-10T18:01:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ivi1523/ppo-Huggy
|
ivi1523
| 2025-06-10T18:02:50Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2025-06-10T18:02:43Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ivi1523/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
entfane/qwen2.5-0.5B-algebra-dpo-communication-based
|
entfane
| 2025-06-10T18:02:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-10T18:01:16Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AitorDL/MNLP_DPO_HS_lr2e-5_b04_Preferences_lr_2e-5_b03
|
AitorDL
| 2025-06-10T18:01:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-10T18:00:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BootesVoid/cmbi975xp092gkfxsjob5msli_cmbqrvu2f02eeh4x5x839rvbh
|
BootesVoid
| 2025-06-10T18:01:45Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-10T18:01:43Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: JULKA
---
# Cmbi975Xp092Gkfxsjob5Msli_Cmbqrvu2F02Eeh4X5X839Rvbh
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `JULKA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "JULKA",
"lora_weights": "https://huggingface.co/BootesVoid/cmbi975xp092gkfxsjob5msli_cmbqrvu2f02eeh4x5x839rvbh/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbi975xp092gkfxsjob5msli_cmbqrvu2f02eeh4x5x839rvbh', weight_name='lora.safetensors')
image = pipeline('JULKA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbi975xp092gkfxsjob5msli_cmbqrvu2f02eeh4x5x839rvbh/discussions) to add images that show off what you’ve made with this LoRA.
|
AhChat/my_awesome_opus_books_model
|
AhChat
| 2025-06-10T17:59:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-10T16:38:28Z |
---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5715
- Bleu: 6.4721
- Gen Len: 18.3072
## Model description
More information needed
## Intended uses & limitations
More information needed
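Pending those details, a hedged usage sketch follows; the task prefix and the English-to-French pair are assumptions based on the common opus_books fine-tuning recipe, not details stated in this card.
```python
from transformers import pipeline

# Assumed setup: a T5-style "translate English to French:" prefix, matching the usual
# opus_books tutorial; the actual training language pair is not documented here.
generator = pipeline("text2text-generation", model="AhChat/my_awesome_opus_books_model")
text = "translate English to French: Legumes share resources with nitrogen-fixing bacteria."
print(generator(text)[0]["generated_text"])
```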
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.8277 | 1.0 | 12709 | 1.5972 | 6.261 | 18.3186 |
| 1.7904 | 2.0 | 25418 | 1.5715 | 6.4721 | 18.3072 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
bola23/outputs
|
bola23
| 2025-06-10T17:57:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-06-10T17:57:07Z |
---
base_model: meta-llama/Llama-3.1-8B
library_name: transformers
model_name: outputs
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for outputs
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bola23/outputs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
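As a rough illustration of the method (not the authors' actual script; the dataset below is a placeholder), SFT with TRL typically looks like this:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical chat-style dataset with a "messages" column; not the dataset used for this model.
dataset = load_dataset("my_org/my_sft_dataset", split="train")

config = SFTConfig(output_dir="outputs")
trainer = SFTTrainer(model="meta-llama/Llama-3.1-8B", args=config, train_dataset=dataset)
trainer.train()
```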
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
thomaszimmerer/thomas-lora-2
|
thomaszimmerer
| 2025-06-10T17:56:18Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-10T16:37:33Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Thomas Lora 2
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/thomaszimmerer/thomas-lora-2/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('thomaszimmerer/thomas-lora-2', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/thomaszimmerer/thomas-lora-2/discussions) to add images that show off what you’ve made with this LoRA.
|
pgilliar/MNLP_M3_document_encoder
|
pgilliar
| 2025-06-10T17:54:39Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"tf",
"rust",
"onnx",
"safetensors",
"openvino",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"dataset:eli5",
"dataset:snli",
"dataset:multi_nli",
"dataset:wikihow",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/QQP",
"dataset:embedding-data/SPECTER",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/WikiAnswers",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-10T17:53:59Z |
---
language: en
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
pipeline_tag: sentence-similarity
---
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
------
## Background
The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
1B sentence-pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch.
We then apply the cross-entropy loss, comparing against the true pairs.
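In code, this in-batch objective can be sketched roughly as follows (an illustration of the idea, not the actual `train_script.py`; the similarity scale is an assumed value):
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    # anchor_emb, positive_emb: (batch, dim) embeddings of paired sentences.
    anchor_emb = F.normalize(anchor_emb, p=2, dim=1)
    positive_emb = F.normalize(positive_emb, p=2, dim=1)
    scores = anchor_emb @ positive_emb.T * scale   # (batch, batch) cosine similarities
    # The true pair for row i sits in column i; every other column is an in-batch negative.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```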
#### Hyper parameters
We trained our model on a TPU v3-8. We train the model during 100k steps using a batch size of 1024 (128 per TPU core).
We use a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
#### Training data
We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences.
We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** |
|
skykeefe/idkanymorelol
|
skykeefe
| 2025-06-10T17:54:35Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-10T17:54:35Z |
---
license: apache-2.0
---
|
ntgiaky/phobert-intent-classifier-smart-home
|
ntgiaky
| 2025-06-10T17:54:15Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"roberta",
"intent-classification",
"smart-home",
"vietnamese",
"phobert",
"vi",
"dataset:custom-vn-slu-augmented",
"license:mit",
"model-index",
"region:us"
] | null | 2025-06-10T16:41:27Z |
---
language: vi
tags:
- intent-classification
- smart-home
- vietnamese
- phobert
license: mit
datasets:
- custom-vn-slu-augmented
metrics:
- accuracy
- f1
model-index:
- name: PhoBERT Intent Classifier for Vietnamese Smart Home
results:
- task:
type: text-classification
name: Intent Classification
dataset:
name: VN-SLU Augmented Dataset
type: custom
metrics:
- type: accuracy
value: 98.3
name: Accuracy
- type: f1
value: 97.72
name: F1 Score (Weighted)
- type: f1
value: 71.90
name: F1 Score (Macro)
widget:
- text: "bật đèn phòng khách"
- text: "tắt quạt phòng ngủ lúc 10 giờ tối"
- text: "kiểm tra tình trạng điều hòa"
- text: "tăng độ sáng đèn bàn"
- text: "mở cửa chính"
---
# PhoBERT Fine-tuned for Vietnamese Smart Home Intent Classification
This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) specifically trained for intent classification in Vietnamese smart home commands.
## Model Description
- **Base Model**: vinai/phobert-base
- **Task**: Intent Classification for Smart Home Commands
- **Language**: Vietnamese
- **Training Data**: VN-SLU Augmented Dataset (4,000 training samples)
- **Number of Intent Classes**: 13
## Intended Uses & Limitations
### Intended Uses
- Classifying user intents in Vietnamese smart home voice commands
- Integration with voice assistants for home automation
- Research in Vietnamese NLP for IoT applications
### Limitations
- Optimized specifically for smart home domain
- May not generalize well to other domains
- Trained on Vietnamese language only
## Intent Classes
The model can classify the following 13 intents:
1. `bật thiết bị` (turn on device)
2. `tắt thiết bị` (turn off device)
3. `mở thiết bị` (open device)
4. `đóng thiết bị` (close device)
5. `tăng độ sáng của thiết bị` (increase device brightness)
6. `giảm độ sáng của thiết bị` (decrease device brightness)
7. `kiểm tra tình trạng thiết bị` (check device status)
8. `điều chỉnh nhiệt độ` (adjust temperature)
9. `hẹn giờ` (set timer)
10. `kích hoạt cảnh` (activate scene)
11. `tắt tất cả thiết bị` (turn off all devices)
12. `mở khóa` (unlock)
13. `khóa` (lock)
## How to Use
### Using Transformers Library
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import pickle
# Load model and tokenizer
model_name = "ntgiaky/phobert-intent-classifier-smart-home"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
# Load label encoder
with open('intent_encoder.pkl', 'rb') as f:
label_encoder = pickle.load(f)
# Predict intent
def predict_intent(text):
# Tokenize
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=128)
# Predict
with torch.no_grad():
outputs = model(**inputs)
predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
predicted_class = torch.argmax(predictions, dim=-1)
# Decode label
intent = label_encoder.inverse_transform(predicted_class.cpu().numpy())[0]
confidence = predictions[0][predicted_class].item()
return intent, confidence
# Example usage
text = "bật đèn phòng khách"
intent, confidence = predict_intent(text)
print(f"Intent: {intent}, Confidence: {confidence:.2f}")
```
### Using Pipeline
```python
from transformers import pipeline
# Load pipeline
classifier = pipeline(
"text-classification",
model="ntgiaky/phobert-intent-classifier-smart-home",
device=0 # Use -1 for CPU
)
# Predict
result = classifier("tắt quạt phòng ngủ")
print(result)
```
## Integration Example
```python
# For Raspberry Pi deployment
import onnxruntime as ort
import numpy as np
# Convert to ONNX first (one-time)
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("ntgiaky/phobert-intent-classifier-smart-home")  # keep the classification head for intent logits
# ... ONNX conversion code ...
# Then use ONNX Runtime for inference
session = ort.InferenceSession("model.onnx")
# ... inference code ...
```
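A slightly more concrete sketch of the export step is shown below; it relies on the optimum library (an assumption — optimum is not mentioned in this card, and plain `torch.onnx.export` is an alternative).
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

model_id = "ntgiaky/phobert-intent-classifier-smart-home"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# export=True converts the PyTorch checkpoint to ONNX on the fly.
ort_model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
ort_model.save_pretrained("onnx_model/")  # writes model.onnx next to the config

# The exported model can then be used like a regular transformers model.
inputs = tokenizer("bật đèn phòng khách", return_tensors="pt")
logits = ort_model(**inputs).logits
print(logits.argmax(dim=-1))
```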
## Citation
If you use this model, please cite:
```bibtex
@misc{phobert-smart-home-2025,
author = {Trần Quang Huy and Nguyễn Trần Gia Kỳ},
title = {PhoBERT Fine-tuned for Vietnamese Smart Home Intent Classification},
year = {2025},
publisher = {Hugging Face},
journal = {Hugging Face Model Hub},
howpublished = {\url{https://huggingface.co/ntgiaky/phobert-intent-classifier-smart-home}}
}
```
## Authors
- **Trần Quang Huy**
- **Nguyễn Trần Gia Kỳ**
- **Advisor**: TS. Đoàn Duy
## License
This model is released under the MIT License.
## Contact
For questions or issues, please open an issue on the [model repository](https://huggingface.co/ntgiaky/phobert-intent-classifier-smart-home) or contact the authors through the university.
|
AyakoMizuki/Mia
|
AyakoMizuki
| 2025-06-10T17:54:11Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-10T17:38:46Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Mia
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Mia
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `Mia` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
|
darkshapes/Flux-1.lite-8B-MLX-Q8
|
darkshapes
| 2025-06-10T17:52:39Z | 0 | 0 | null |
[
"text-to-image",
"en",
"dataset:freepik",
"base_model:Freepik/flux.1-lite-8B",
"base_model:finetune:Freepik/flux.1-lite-8B",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-10T16:23:20Z |
---
name: flux.1-lite-8B-MLX-Q8
base_model: Freepik/flux.1-lite-8B
license: other
pipeline_tag: text-to-image
tasks :
- text-to-image
- image-generation
language: en
datasets: freepik
funded_by: Fal
hardware_type: 64 H100s
cloud_region :
- Western Europe
- East/Midwestern US
cloud_provider: Nebius
hours_used: 2880
get_started_code: uvx --from mflux mflux-generate --base-model dev --steps 50 --guidance 4.0 --seed 10 --width 1024 --height 1024 -q 8 --model darkshapes/flux.1-lite-8B-MLX-Q8 --prompt 'Test Prompt'
---
> [!IMPORTANT]
> Original Model Link : [https://huggingface.co/Freepik/flux.1-lite-8B](https://huggingface.co/Freepik/flux.1-lite-8B)
>
```
name: flux.1-lite-8B-MLX-Q8
base_model: Freepik/flux.1-lite-8B
license: other
pipeline_tag: text-to-image
tasks :
- text-to-image
- image-generation
language: en
datasets: freepik
funded_by: Fal
hardware_type: 64 H100s
cloud_region :
- Western Europe
- East/Midwestern US
cloud_provider: Nebius
hours_used: 2880
get_started_code: uvx --from mflux mflux-generate --base-model dev --steps 50 --guidance 4.0 --width 1024 --height 1024 -q 8 --model darkshapes/flux.1-lite-8B-MLX-Q8 --prompt 'Test Prompt'
```
# flux.1-lite-8B
flux.1-lite-8B is an 8 billion parameter image synthesis model, a Diffusion Transformer derived from black-forest-labs/Flux.1-dev. The training data for this model originates from ~100k licensed stock images from Freepik.com. Separate self and cross attention layers, embedding extraction from intermediate T5 layers, and rotary positional embedding processes enabled an efficient model construction while maintaining quality performance. This model excels with illustrative and vector styles in particular.
> [!WARNING]
> MLX is a framework for Apple's Metal graphics API, supported on Apple computers with ARM M-series processors (M1/M2/M3/M4)
> [!NOTE]
> Generation using [uv](https://docs.astral.sh/uv/):
> ```
> uvx --from mflux mflux-generate --base-model dev --steps 50 --guidance 4.0 --seed 10 --width 1024 --height 1024 -q 8 --model darkshapes/flux.1-lite-8B-MLX-Q8 --prompt 'Test Prompt'
> ```
> [!NOTE]
> Generation using pip:
> ```
> pipx run --spec mflux mflux-generate --base-model dev --steps 50 --guidance 4.0 --seed 10 --width 1024 --height 1024 -q 8 --model darkshapes/flux.1-lite-8B-MLX-Q8 --prompt 'Test Prompt'
> ```
|
YuchenLi01/genSoftQwen2.5MathRM72Bth0.5pair4NoGT_1.5B_dpo_ebs32_lr5e-07_beta1.5_epoch8.0_42
|
YuchenLi01
| 2025-06-10T17:51:56Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:YuchenLi01/MATH_Qwen2.5-1.5BInstruct_Soft_DPO_Qwen2.5MathRM72B_th0.5_pair4NoGT",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-09T19:22:27Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
datasets:
- YuchenLi01/MATH_Qwen2.5-1.5BInstruct_Soft_DPO_Qwen2.5MathRM72B_th0.5_pair4NoGT
model-index:
- name: genSoftQwen2.5MathRM72Bth0.5pair4NoGT_1.5B_dpo_ebs32_lr5e-07_beta1.5_epoch8.0_42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# genSoftQwen2.5MathRM72Bth0.5pair4NoGT_1.5B_dpo_ebs32_lr5e-07_beta1.5_epoch8.0_42
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the YuchenLi01/MATH_Qwen2.5-1.5BInstruct_Soft_DPO_Qwen2.5MathRM72B_th0.5_pair4NoGT dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1130
- Rewards/chosen: -0.4448
- Rewards/rejected: -5.8140
- Rewards/accuracies: 0.7622
- Rewards/margins: 5.3692
- Logps/rejected: -51.3617
- Logps/chosen: -37.6017
- Logits/rejected: -2.4410
- Logits/chosen: -2.6351
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 8.0
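A minimal TRL sketch that mirrors the hyperparameters above; the split names and exact argument names are assumptions (they differ slightly across TRL releases), so treat this as an illustrative reconstruction rather than the exact training script.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "Qwen/Qwen2.5-1.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Preference pairs; split names are assumed here.
dataset = load_dataset(
    "YuchenLi01/MATH_Qwen2.5-1.5BInstruct_Soft_DPO_Qwen2.5MathRM72B_th0.5_pair4NoGT"
)

# Mirrors the list above: per-device batch size 4, cosine schedule, warmup 0.1, beta 1.5.
args = DPOConfig(
    output_dir="dpo-qwen2.5-1.5b",
    learning_rate=5e-7,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=8.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    beta=1.5,
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset.get("test"),
    processing_class=tokenizer,  # `tokenizer=` in older TRL releases
)
trainer.train()
```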
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.7928 | 0.0287 | 20 | 0.7514 | -0.0078 | -0.0175 | 0.4878 | 0.0097 | -47.4973 | -37.3104 | -2.1647 | -2.3055 |
| 0.731 | 0.0573 | 40 | 0.7643 | -0.0306 | 0.0115 | 0.4756 | -0.0421 | -47.4780 | -37.3256 | -2.1637 | -2.3045 |
| 0.7496 | 0.0860 | 60 | 0.7537 | -0.0234 | -0.0266 | 0.4939 | 0.0031 | -47.5034 | -37.3208 | -2.1630 | -2.3030 |
| 0.7762 | 0.1146 | 80 | 0.7601 | 0.0130 | -0.0070 | 0.5366 | 0.0200 | -47.4903 | -37.2965 | -2.1612 | -2.3012 |
| 0.6473 | 0.1433 | 100 | 0.7580 | 0.0273 | -0.0027 | 0.5366 | 0.0300 | -47.4875 | -37.2870 | -2.1657 | -2.3065 |
| 0.7257 | 0.1719 | 120 | 0.7473 | -0.0380 | 0.0161 | 0.4329 | -0.0541 | -47.4749 | -37.3305 | -2.1671 | -2.3080 |
| 0.8268 | 0.2006 | 140 | 0.7495 | -0.0581 | -0.0707 | 0.4939 | 0.0126 | -47.5328 | -37.3439 | -2.1631 | -2.3039 |
| 0.7202 | 0.2292 | 160 | 0.7365 | -0.0103 | -0.1378 | 0.5549 | 0.1275 | -47.5775 | -37.3120 | -2.1625 | -2.3030 |
| 0.6835 | 0.2579 | 180 | 0.7315 | -0.0179 | -0.1123 | 0.5793 | 0.0945 | -47.5606 | -37.3171 | -2.1630 | -2.3034 |
| 0.7214 | 0.2865 | 200 | 0.7199 | 0.0580 | -0.0883 | 0.5732 | 0.1463 | -47.5445 | -37.2665 | -2.1617 | -2.3024 |
| 0.7425 | 0.3152 | 220 | 0.7150 | -0.0271 | -0.2070 | 0.5183 | 0.1800 | -47.6237 | -37.3232 | -2.1603 | -2.3014 |
| 0.8579 | 0.3438 | 240 | 0.7096 | -0.1333 | -0.3021 | 0.5549 | 0.1688 | -47.6871 | -37.3940 | -2.1521 | -2.2929 |
| 0.7338 | 0.3725 | 260 | 0.7003 | -0.1719 | -0.3421 | 0.6098 | 0.1702 | -47.7138 | -37.4198 | -2.1585 | -2.3011 |
| 0.7376 | 0.4011 | 280 | 0.6751 | -0.1860 | -0.3819 | 0.5854 | 0.1959 | -47.7403 | -37.4292 | -2.1537 | -2.2968 |
| 0.6339 | 0.4298 | 300 | 0.6785 | -0.2004 | -0.4422 | 0.6098 | 0.2418 | -47.7805 | -37.4388 | -2.1518 | -2.2945 |
| 0.6225 | 0.4585 | 320 | 0.6674 | -0.1704 | -0.4729 | 0.6463 | 0.3025 | -47.8010 | -37.4188 | -2.1541 | -2.2974 |
| 0.579 | 0.4871 | 340 | 0.6474 | -0.1687 | -0.4939 | 0.6402 | 0.3251 | -47.8149 | -37.4177 | -2.1566 | -2.3012 |
| 0.6451 | 0.5158 | 360 | 0.6410 | 0.0118 | -0.3711 | 0.6220 | 0.3829 | -47.7331 | -37.2973 | -2.1662 | -2.3121 |
| 0.6538 | 0.5444 | 380 | 0.6364 | -0.0017 | -0.4015 | 0.6220 | 0.3999 | -47.7534 | -37.3063 | -2.1613 | -2.3063 |
| 0.8197 | 0.5731 | 400 | 0.6280 | 0.2083 | -0.3317 | 0.6524 | 0.5400 | -47.7068 | -37.1663 | -2.1787 | -2.3252 |
| 0.7057 | 0.6017 | 420 | 0.6240 | 0.2479 | -0.3546 | 0.6890 | 0.6025 | -47.7221 | -37.1399 | -2.1832 | -2.3308 |
| 0.6017 | 0.6304 | 440 | 0.6256 | 0.2469 | -0.3383 | 0.6951 | 0.5852 | -47.7112 | -37.1406 | -2.1756 | -2.3217 |
| 0.5999 | 0.6590 | 460 | 0.6262 | 0.1867 | -0.4060 | 0.6951 | 0.5928 | -47.7564 | -37.1807 | -2.1749 | -2.3230 |
| 0.7107 | 0.6877 | 480 | 0.6079 | 0.2263 | -0.4225 | 0.6951 | 0.6488 | -47.7673 | -37.1543 | -2.1770 | -2.3259 |
| 0.6802 | 0.7163 | 500 | 0.6210 | 0.0788 | -0.5335 | 0.6829 | 0.6123 | -47.8414 | -37.2527 | -2.1671 | -2.3160 |
| 0.5559 | 0.7450 | 520 | 0.5969 | 0.1966 | -0.4291 | 0.6402 | 0.6256 | -47.7717 | -37.1741 | -2.1742 | -2.3240 |
| 0.4405 | 0.7736 | 540 | 0.6161 | 0.2357 | -0.3912 | 0.6280 | 0.6270 | -47.7465 | -37.1480 | -2.1724 | -2.3215 |
| 0.7618 | 0.8023 | 560 | 0.6139 | 0.0737 | -0.6811 | 0.6646 | 0.7548 | -47.9398 | -37.2561 | -2.1725 | -2.3245 |
| 0.6474 | 0.8309 | 580 | 0.6280 | -0.1066 | -0.9227 | 0.6585 | 0.8161 | -48.1008 | -37.3762 | -2.1516 | -2.3023 |
| 0.5268 | 0.8596 | 600 | 0.6122 | 0.0132 | -0.7545 | 0.6585 | 0.7677 | -47.9887 | -37.2964 | -2.1594 | -2.3114 |
| 0.6057 | 0.8883 | 620 | 0.6149 | 0.2543 | -0.5146 | 0.6220 | 0.7689 | -47.8287 | -37.1356 | -2.1755 | -2.3278 |
| 0.7815 | 0.9169 | 640 | 0.5993 | 0.5243 | -0.3617 | 0.6951 | 0.8860 | -47.7268 | -36.9557 | -2.1886 | -2.3420 |
| 0.6002 | 0.9456 | 660 | 0.6021 | 0.3025 | -0.5777 | 0.6829 | 0.8802 | -47.8708 | -37.1035 | -2.1748 | -2.3288 |
| 0.4714 | 0.9742 | 680 | 0.6061 | 0.3065 | -0.5966 | 0.6829 | 0.9031 | -47.8834 | -37.1009 | -2.1779 | -2.3328 |
| 0.4473 | 1.0029 | 700 | 0.5906 | 0.1890 | -0.7219 | 0.7012 | 0.9109 | -47.9670 | -37.1792 | -2.1747 | -2.3296 |
| 0.6104 | 1.0315 | 720 | 0.5983 | 0.2820 | -0.6438 | 0.6707 | 0.9257 | -47.9149 | -37.1172 | -2.1731 | -2.3283 |
| 0.247 | 1.0602 | 740 | 0.6153 | 0.2211 | -0.7089 | 0.6768 | 0.9300 | -47.9583 | -37.1578 | -2.1696 | -2.3247 |
| 0.2829 | 1.0888 | 760 | 0.5967 | 0.5021 | -0.5021 | 0.7073 | 1.0042 | -47.8204 | -36.9704 | -2.1858 | -2.3419 |
| 0.7007 | 1.1175 | 780 | 0.5845 | 0.4472 | -0.6834 | 0.7012 | 1.1306 | -47.9413 | -37.0070 | -2.1841 | -2.3406 |
| 0.4195 | 1.1461 | 800 | 0.5889 | 0.3344 | -0.7400 | 0.6646 | 1.0743 | -47.9790 | -37.0823 | -2.1829 | -2.3402 |
| 0.5674 | 1.1748 | 820 | 0.6098 | 0.1958 | -0.8943 | 0.6890 | 1.0901 | -48.0819 | -37.1747 | -2.1825 | -2.3397 |
| 0.3761 | 1.2034 | 840 | 0.5986 | 0.0432 | -0.9917 | 0.6585 | 1.0348 | -48.1468 | -37.2764 | -2.1704 | -2.3282 |
| 0.3679 | 1.2321 | 860 | 0.6030 | 0.0566 | -1.0274 | 0.6585 | 1.0841 | -48.1706 | -37.2674 | -2.1725 | -2.3296 |
| 0.5377 | 1.2607 | 880 | 0.6021 | 0.1168 | -0.9839 | 0.6768 | 1.1007 | -48.1416 | -37.2273 | -2.1751 | -2.3331 |
| 0.3405 | 1.2894 | 900 | 0.6064 | 0.2970 | -0.8893 | 0.7256 | 1.1864 | -48.0786 | -37.1072 | -2.1883 | -2.3479 |
| 0.1659 | 1.3181 | 920 | 0.6314 | 0.1824 | -0.8912 | 0.6524 | 1.0736 | -48.0798 | -37.1836 | -2.1828 | -2.3415 |
| 0.4237 | 1.3467 | 940 | 0.6102 | 0.2663 | -0.9341 | 0.7134 | 1.2004 | -48.1084 | -37.1276 | -2.1804 | -2.3382 |
| 0.3987 | 1.3754 | 960 | 0.6131 | 0.4488 | -0.8216 | 0.6768 | 1.2705 | -48.0334 | -37.0060 | -2.1947 | -2.3527 |
| 0.2087 | 1.4040 | 980 | 0.5906 | 0.4733 | -0.9328 | 0.7256 | 1.4061 | -48.1076 | -36.9896 | -2.2024 | -2.3630 |
| 0.3061 | 1.4327 | 1000 | 0.6170 | 0.1882 | -1.0420 | 0.7012 | 1.2302 | -48.1803 | -37.1797 | -2.1883 | -2.3474 |
| 0.2672 | 1.4613 | 1020 | 0.6116 | 0.1180 | -1.1792 | 0.7012 | 1.2972 | -48.2719 | -37.2265 | -2.1890 | -2.3493 |
| 0.3562 | 1.4900 | 1040 | 0.6018 | 0.2854 | -1.0265 | 0.6951 | 1.3119 | -48.1700 | -37.1149 | -2.2012 | -2.3613 |
| 0.3363 | 1.5186 | 1060 | 0.6195 | 0.3778 | -0.8765 | 0.6768 | 1.2543 | -48.0700 | -37.0533 | -2.2075 | -2.3676 |
| 0.3844 | 1.5473 | 1080 | 0.6089 | 0.1947 | -1.1531 | 0.7195 | 1.3478 | -48.2544 | -37.1754 | -2.1980 | -2.3588 |
| 0.4012 | 1.5759 | 1100 | 0.5956 | 0.1228 | -1.1446 | 0.6951 | 1.2675 | -48.2488 | -37.2233 | -2.1951 | -2.3560 |
| 0.327 | 1.6046 | 1120 | 0.6081 | 0.2826 | -1.0557 | 0.6646 | 1.3384 | -48.1895 | -37.1168 | -2.1960 | -2.3551 |
| 0.2342 | 1.6332 | 1140 | 0.6209 | 0.0370 | -1.3406 | 0.7073 | 1.3776 | -48.3794 | -37.2805 | -2.1911 | -2.3520 |
| 0.2669 | 1.6619 | 1160 | 0.6203 | 0.2179 | -1.2507 | 0.7012 | 1.4686 | -48.3195 | -37.1599 | -2.2000 | -2.3610 |
| 0.6199 | 1.6905 | 1180 | 0.6189 | 0.4441 | -0.9714 | 0.7073 | 1.4155 | -48.1333 | -37.0091 | -2.2058 | -2.3658 |
| 0.3065 | 1.7192 | 1200 | 0.6199 | 0.0421 | -1.3445 | 0.7073 | 1.3866 | -48.3820 | -37.2771 | -2.1932 | -2.3535 |
| 0.4585 | 1.7479 | 1220 | 0.6250 | 0.1468 | -1.1685 | 0.7012 | 1.3153 | -48.2647 | -37.2073 | -2.1939 | -2.3526 |
| 0.1956 | 1.7765 | 1240 | 0.6335 | 0.0356 | -1.3259 | 0.6951 | 1.3614 | -48.3696 | -37.2815 | -2.1912 | -2.3505 |
| 0.3968 | 1.8052 | 1260 | 0.6243 | -0.0324 | -1.3457 | 0.6829 | 1.3133 | -48.3828 | -37.3268 | -2.1853 | -2.3448 |
| 0.3171 | 1.8338 | 1280 | 0.6281 | 0.0291 | -1.3772 | 0.7195 | 1.4063 | -48.4038 | -37.2858 | -2.1849 | -2.3447 |
| 0.3697 | 1.8625 | 1300 | 0.6468 | -0.4073 | -1.8934 | 0.6951 | 1.4860 | -48.7479 | -37.5767 | -2.1688 | -2.3294 |
| 0.6919 | 1.8911 | 1320 | 0.6367 | 0.0682 | -1.3252 | 0.6890 | 1.3934 | -48.3691 | -37.2597 | -2.1928 | -2.3533 |
| 0.1672 | 1.9198 | 1340 | 0.6425 | 0.0594 | -1.2842 | 0.6585 | 1.3436 | -48.3418 | -37.2656 | -2.1903 | -2.3514 |
| 0.6577 | 1.9484 | 1360 | 0.6240 | 0.0020 | -1.4207 | 0.7195 | 1.4227 | -48.4328 | -37.3038 | -2.1917 | -2.3529 |
| 0.5993 | 1.9771 | 1380 | 0.6220 | 0.1529 | -1.2681 | 0.6951 | 1.4210 | -48.3311 | -37.2033 | -2.1972 | -2.3576 |
| 0.0986 | 2.0057 | 1400 | 0.6307 | 0.2365 | -1.1349 | 0.6585 | 1.3714 | -48.2423 | -37.1475 | -2.2010 | -2.3613 |
| 0.0338 | 2.0344 | 1420 | 0.6335 | -0.2288 | -1.7656 | 0.6951 | 1.5367 | -48.6627 | -37.4577 | -2.1787 | -2.3406 |
| 0.063 | 2.0630 | 1440 | 0.6207 | -0.2477 | -1.8723 | 0.7195 | 1.6246 | -48.7339 | -37.4703 | -2.1821 | -2.3447 |
| 0.0372 | 2.0917 | 1460 | 0.6302 | 0.3317 | -1.3680 | 0.7012 | 1.6997 | -48.3977 | -37.0840 | -2.2163 | -2.3790 |
| 0.0561 | 2.1203 | 1480 | 0.6379 | 0.0407 | -1.7830 | 0.7622 | 1.8238 | -48.6744 | -37.2780 | -2.1988 | -2.3621 |
| 0.0777 | 2.1490 | 1500 | 0.6513 | 0.1493 | -1.7175 | 0.75 | 1.8668 | -48.6307 | -37.2057 | -2.2097 | -2.3739 |
| 0.0742 | 2.1777 | 1520 | 0.6372 | 0.3690 | -1.5756 | 0.7134 | 1.9446 | -48.5361 | -37.0592 | -2.2211 | -2.3851 |
| 0.1014 | 2.2063 | 1540 | 0.6605 | -0.0006 | -1.8407 | 0.7012 | 1.8401 | -48.7128 | -37.3056 | -2.2063 | -2.3723 |
| 0.0559 | 2.2350 | 1560 | 0.6647 | -0.0007 | -1.9446 | 0.7134 | 1.9439 | -48.7821 | -37.3057 | -2.2099 | -2.3759 |
| 0.2224 | 2.2636 | 1580 | 0.6734 | 0.1503 | -1.7687 | 0.7073 | 1.9189 | -48.6648 | -37.2050 | -2.2196 | -2.3861 |
| 0.0762 | 2.2923 | 1600 | 0.6678 | 0.2905 | -1.6954 | 0.6951 | 1.9859 | -48.6160 | -37.1116 | -2.2357 | -2.4033 |
| 0.2452 | 2.3209 | 1620 | 0.6795 | 0.4962 | -1.5076 | 0.7317 | 2.0038 | -48.4907 | -36.9744 | -2.2440 | -2.4112 |
| 0.0354 | 2.3496 | 1640 | 0.6843 | 0.4023 | -1.6671 | 0.7134 | 2.0693 | -48.5971 | -37.0370 | -2.2475 | -2.4154 |
| 0.1987 | 2.3782 | 1660 | 0.6911 | 0.4904 | -1.6030 | 0.7073 | 2.0934 | -48.5544 | -36.9782 | -2.2530 | -2.4212 |
| 0.1498 | 2.4069 | 1680 | 0.7020 | 0.2536 | -1.8520 | 0.7195 | 2.1056 | -48.7204 | -37.1361 | -2.2406 | -2.4096 |
| 0.0617 | 2.4355 | 1700 | 0.7078 | 0.1642 | -2.0151 | 0.7012 | 2.1793 | -48.8291 | -37.1957 | -2.2341 | -2.4027 |
| 0.0721 | 2.4642 | 1720 | 0.7118 | -0.0325 | -2.0697 | 0.6707 | 2.0372 | -48.8655 | -37.3268 | -2.2306 | -2.3997 |
| 0.0521 | 2.4928 | 1740 | 0.7055 | -0.3331 | -2.4476 | 0.7073 | 2.1145 | -49.1174 | -37.5273 | -2.2059 | -2.3751 |
| 0.055 | 2.5215 | 1760 | 0.6997 | -0.3667 | -2.5408 | 0.7134 | 2.1741 | -49.1796 | -37.5496 | -2.2166 | -2.3868 |
| 0.0903 | 2.5501 | 1780 | 0.7113 | -0.2858 | -2.4486 | 0.7439 | 2.1628 | -49.1181 | -37.4957 | -2.2138 | -2.3836 |
| 0.0453 | 2.5788 | 1800 | 0.7051 | 0.2785 | -1.9874 | 0.75 | 2.2659 | -48.8106 | -37.1195 | -2.2511 | -2.4217 |
| 0.1064 | 2.6074 | 1820 | 0.7095 | 0.2406 | -1.9493 | 0.7439 | 2.1899 | -48.7852 | -37.1448 | -2.2445 | -2.4146 |
| 0.0386 | 2.6361 | 1840 | 0.7283 | -0.0103 | -2.1669 | 0.7439 | 2.1567 | -48.9303 | -37.3120 | -2.2272 | -2.3986 |
| 0.3008 | 2.6648 | 1860 | 0.7347 | -0.1451 | -2.3757 | 0.7378 | 2.2306 | -49.0695 | -37.4019 | -2.2140 | -2.3856 |
| 0.0656 | 2.6934 | 1880 | 0.7283 | 0.2229 | -2.0668 | 0.75 | 2.2897 | -48.8635 | -37.1566 | -2.2324 | -2.4039 |
| 0.0723 | 2.7221 | 1900 | 0.7282 | -0.1779 | -2.5250 | 0.7195 | 2.3471 | -49.1690 | -37.4238 | -2.2098 | -2.3811 |
| 0.1359 | 2.7507 | 1920 | 0.7192 | -0.0838 | -2.4250 | 0.75 | 2.3412 | -49.1024 | -37.3610 | -2.2228 | -2.3953 |
| 0.1302 | 2.7794 | 1940 | 0.7274 | -0.0127 | -2.4967 | 0.75 | 2.4840 | -49.1501 | -37.3137 | -2.2322 | -2.4044 |
| 0.1229 | 2.8080 | 1960 | 0.7108 | 0.0211 | -2.4790 | 0.7439 | 2.5001 | -49.1383 | -37.2911 | -2.2375 | -2.4091 |
| 0.1202 | 2.8367 | 1980 | 0.7250 | 0.1294 | -2.2904 | 0.7561 | 2.4199 | -49.0126 | -37.2189 | -2.2409 | -2.4113 |
| 0.057 | 2.8653 | 2000 | 0.7071 | 0.2988 | -2.1859 | 0.7439 | 2.4847 | -48.9429 | -37.1060 | -2.2570 | -2.4290 |
| 0.0959 | 2.8940 | 2020 | 0.7210 | 0.3411 | -2.0084 | 0.7561 | 2.3494 | -48.8246 | -37.0778 | -2.2583 | -2.4303 |
| 0.2817 | 2.9226 | 2040 | 0.7216 | 0.1300 | -2.2967 | 0.7378 | 2.4266 | -49.0168 | -37.2185 | -2.2406 | -2.4125 |
| 0.6541 | 2.9513 | 2060 | 0.7282 | 0.0017 | -2.4450 | 0.7378 | 2.4468 | -49.1157 | -37.3041 | -2.2387 | -2.4101 |
| 0.042 | 2.9799 | 2080 | 0.7077 | 0.0490 | -2.5035 | 0.7439 | 2.5525 | -49.1547 | -37.2725 | -2.2465 | -2.4188 |
| 0.0089 | 3.0086 | 2100 | 0.7133 | -0.0381 | -2.5231 | 0.75 | 2.4850 | -49.1677 | -37.3306 | -2.2352 | -2.4069 |
| 0.0721 | 3.0372 | 2120 | 0.7310 | -0.1357 | -2.6555 | 0.7195 | 2.5197 | -49.2560 | -37.3957 | -2.2350 | -2.4075 |
| 0.0125 | 3.0659 | 2140 | 0.7289 | 0.1899 | -2.4457 | 0.7622 | 2.6356 | -49.1161 | -37.1786 | -2.2545 | -2.4279 |
| 0.0297 | 3.0946 | 2160 | 0.7484 | 0.0728 | -2.6560 | 0.7439 | 2.7288 | -49.2563 | -37.2566 | -2.2537 | -2.4276 |
| 0.0237 | 3.1232 | 2180 | 0.7679 | 0.2225 | -2.5259 | 0.7378 | 2.7484 | -49.1696 | -37.1568 | -2.2724 | -2.4466 |
| 0.0084 | 3.1519 | 2200 | 0.7743 | 0.0334 | -2.7988 | 0.7317 | 2.8321 | -49.3515 | -37.2830 | -2.2704 | -2.4455 |
| 0.0071 | 3.1805 | 2220 | 0.7777 | 0.1805 | -2.7171 | 0.7622 | 2.8976 | -49.2971 | -37.1849 | -2.2821 | -2.4573 |
| 0.019 | 3.2092 | 2240 | 0.7602 | 0.1961 | -2.7390 | 0.7622 | 2.9350 | -49.3117 | -37.1745 | -2.2839 | -2.4598 |
| 0.0053 | 3.2378 | 2260 | 0.7749 | 0.4526 | -2.6596 | 0.7683 | 3.1122 | -49.2587 | -37.0034 | -2.3020 | -2.4783 |
| 0.0164 | 3.2665 | 2280 | 0.7822 | 0.3732 | -2.6889 | 0.7744 | 3.0621 | -49.2783 | -37.0564 | -2.3030 | -2.4798 |
| 0.0068 | 3.2951 | 2300 | 0.7899 | 0.0399 | -3.1333 | 0.7439 | 3.1732 | -49.5745 | -37.2786 | -2.2989 | -2.4764 |
| 0.0077 | 3.3238 | 2320 | 0.7887 | 0.1155 | -2.9342 | 0.7561 | 3.0497 | -49.4418 | -37.2282 | -2.2970 | -2.4759 |
| 0.0231 | 3.3524 | 2340 | 0.7947 | 0.1411 | -2.8551 | 0.7439 | 2.9963 | -49.3891 | -37.2111 | -2.3068 | -2.4853 |
| 0.0549 | 3.3811 | 2360 | 0.8034 | 0.2132 | -2.9557 | 0.7805 | 3.1689 | -49.4562 | -37.1630 | -2.3121 | -2.4900 |
| 0.0814 | 3.4097 | 2380 | 0.7907 | 0.2118 | -2.9652 | 0.7378 | 3.1770 | -49.4625 | -37.1640 | -2.3046 | -2.4826 |
| 0.0135 | 3.4384 | 2400 | 0.7899 | 0.0287 | -3.1253 | 0.7622 | 3.1540 | -49.5692 | -37.2860 | -2.2945 | -2.4723 |
| 0.0862 | 3.4670 | 2420 | 0.7882 | -0.3170 | -3.6215 | 0.7622 | 3.3045 | -49.9000 | -37.5165 | -2.2759 | -2.4542 |
| 0.0149 | 3.4957 | 2440 | 0.8006 | -0.5581 | -3.7738 | 0.7561 | 3.2157 | -50.0015 | -37.6772 | -2.2578 | -2.4357 |
| 0.0191 | 3.5244 | 2460 | 0.7836 | -0.4308 | -3.6489 | 0.7561 | 3.2181 | -49.9183 | -37.5924 | -2.2620 | -2.4399 |
| 0.0161 | 3.5530 | 2480 | 0.8380 | -0.1969 | -3.3682 | 0.7378 | 3.1712 | -49.7311 | -37.4365 | -2.2813 | -2.4603 |
| 0.0934 | 3.5817 | 2500 | 0.8258 | -0.0412 | -3.3281 | 0.7439 | 3.2869 | -49.7044 | -37.3326 | -2.2867 | -2.4660 |
| 0.0169 | 3.6103 | 2520 | 0.8328 | 0.1485 | -3.1268 | 0.7439 | 3.2753 | -49.5702 | -37.2062 | -2.2883 | -2.4670 |
| 0.0831 | 3.6390 | 2540 | 0.8336 | -0.0808 | -3.2812 | 0.7317 | 3.2004 | -49.6731 | -37.3591 | -2.2800 | -2.4587 |
| 0.2622 | 3.6676 | 2560 | 0.8307 | 0.2177 | -3.0623 | 0.7378 | 3.2800 | -49.5272 | -37.1601 | -2.2951 | -2.4736 |
| 0.0038 | 3.6963 | 2580 | 0.8253 | 0.1987 | -3.0051 | 0.75 | 3.2038 | -49.4891 | -37.1727 | -2.2973 | -2.4753 |
| 0.173 | 3.7249 | 2600 | 0.8292 | 0.1861 | -3.0668 | 0.7195 | 3.2529 | -49.5302 | -37.1811 | -2.3006 | -2.4795 |
| 0.1305 | 3.7536 | 2620 | 0.8336 | -0.2786 | -3.5522 | 0.7561 | 3.2735 | -49.8538 | -37.4909 | -2.2721 | -2.4511 |
| 0.0884 | 3.7822 | 2640 | 0.8450 | -0.4424 | -3.6805 | 0.7439 | 3.2381 | -49.9394 | -37.6001 | -2.2600 | -2.4389 |
| 0.0045 | 3.8109 | 2660 | 0.8323 | -0.2425 | -3.6836 | 0.7561 | 3.4412 | -49.9414 | -37.4668 | -2.2674 | -2.4463 |
| 0.0152 | 3.8395 | 2680 | 0.8460 | 0.0591 | -3.2821 | 0.7683 | 3.3412 | -49.6738 | -37.2658 | -2.2876 | -2.4676 |
| 0.0079 | 3.8682 | 2700 | 0.8276 | -0.0039 | -3.4071 | 0.7439 | 3.4032 | -49.7571 | -37.3078 | -2.2765 | -2.4552 |
| 0.1328 | 3.8968 | 2720 | 0.8264 | -0.0002 | -3.3186 | 0.7378 | 3.3184 | -49.6981 | -37.3053 | -2.2873 | -2.4683 |
| 0.0312 | 3.9255 | 2740 | 0.8063 | -0.0827 | -3.4566 | 0.7561 | 3.3739 | -49.7901 | -37.3603 | -2.2762 | -2.4554 |
| 0.0038 | 3.9542 | 2760 | 0.8245 | 0.0086 | -3.3572 | 0.75 | 3.3658 | -49.7238 | -37.2994 | -2.2894 | -2.4694 |
| 0.0096 | 3.9828 | 2780 | 0.8281 | -0.2458 | -3.5377 | 0.7439 | 3.2920 | -49.8442 | -37.4690 | -2.2806 | -2.4604 |
| 0.0019 | 4.0115 | 2800 | 0.8470 | -0.1289 | -3.4632 | 0.7561 | 3.3343 | -49.7945 | -37.3911 | -2.2872 | -2.4679 |
| 0.0037 | 4.0401 | 2820 | 0.8506 | -0.0267 | -3.5451 | 0.7622 | 3.5183 | -49.8491 | -37.3230 | -2.2949 | -2.4761 |
| 0.0041 | 4.0688 | 2840 | 0.8633 | -0.2368 | -3.8138 | 0.7317 | 3.5770 | -50.0282 | -37.4631 | -2.2923 | -2.4749 |
| 0.0013 | 4.0974 | 2860 | 0.8747 | -0.3139 | -3.9316 | 0.7561 | 3.6177 | -50.1067 | -37.5144 | -2.2942 | -2.4760 |
| 0.0027 | 4.1261 | 2880 | 0.8936 | -0.5221 | -4.1944 | 0.75 | 3.6724 | -50.2820 | -37.6532 | -2.2923 | -2.4744 |
| 0.0028 | 4.1547 | 2900 | 0.8900 | -0.4697 | -4.2019 | 0.75 | 3.7322 | -50.2870 | -37.6183 | -2.2950 | -2.4776 |
| 0.0026 | 4.1834 | 2920 | 0.8899 | -0.5699 | -4.3045 | 0.7256 | 3.7346 | -50.3554 | -37.6851 | -2.3000 | -2.4839 |
| 0.004 | 4.2120 | 2940 | 0.8926 | -0.4792 | -4.2045 | 0.75 | 3.7253 | -50.2887 | -37.6247 | -2.3152 | -2.4986 |
| 0.0022 | 4.2407 | 2960 | 0.9133 | -0.1876 | -4.0201 | 0.7378 | 3.8326 | -50.1658 | -37.4302 | -2.3286 | -2.5117 |
| 0.0015 | 4.2693 | 2980 | 0.9147 | -0.1140 | -3.9325 | 0.7622 | 3.8185 | -50.1073 | -37.3812 | -2.3393 | -2.5221 |
| 0.002 | 4.2980 | 3000 | 0.9264 | -0.1440 | -4.0713 | 0.7378 | 3.9274 | -50.1999 | -37.4012 | -2.3416 | -2.5263 |
| 0.0027 | 4.3266 | 3020 | 0.9465 | -0.2334 | -4.1554 | 0.7683 | 3.9220 | -50.2560 | -37.4608 | -2.3426 | -2.5273 |
| 0.0128 | 4.3553 | 3040 | 0.9607 | -0.3812 | -4.3053 | 0.75 | 3.9241 | -50.3559 | -37.5593 | -2.3323 | -2.5168 |
| 0.0092 | 4.3840 | 3060 | 0.9387 | -0.6329 | -4.7010 | 0.7622 | 4.0681 | -50.6197 | -37.7271 | -2.3207 | -2.5059 |
| 0.0009 | 4.4126 | 3080 | 0.9478 | -0.6562 | -4.7956 | 0.7561 | 4.1394 | -50.6827 | -37.7427 | -2.3221 | -2.5068 |
| 0.0047 | 4.4413 | 3100 | 0.9456 | -0.3176 | -4.6057 | 0.7561 | 4.2881 | -50.5561 | -37.5169 | -2.3561 | -2.5425 |
| 0.0033 | 4.4699 | 3120 | 0.9556 | -0.2730 | -4.4728 | 0.75 | 4.1998 | -50.4676 | -37.4872 | -2.3550 | -2.5408 |
| 0.0017 | 4.4986 | 3140 | 0.9745 | -0.3027 | -4.5593 | 0.75 | 4.2565 | -50.5252 | -37.5070 | -2.3487 | -2.5341 |
| 0.0009 | 4.5272 | 3160 | 0.9731 | -0.3404 | -4.5470 | 0.7561 | 4.2067 | -50.5170 | -37.5321 | -2.3525 | -2.5386 |
| 0.1341 | 4.5559 | 3180 | 0.9740 | -0.2885 | -4.5137 | 0.7805 | 4.2252 | -50.4948 | -37.4975 | -2.3507 | -2.5355 |
| 0.0139 | 4.5845 | 3200 | 0.9846 | -0.4084 | -4.6203 | 0.7439 | 4.2119 | -50.5659 | -37.5775 | -2.3554 | -2.5410 |
| 0.001 | 4.6132 | 3220 | 0.9964 | -0.4146 | -4.5785 | 0.7683 | 4.1639 | -50.5380 | -37.5816 | -2.3523 | -2.5390 |
| 0.0019 | 4.6418 | 3240 | 0.9914 | -0.3992 | -4.6578 | 0.7744 | 4.2586 | -50.5909 | -37.5713 | -2.3588 | -2.5460 |
| 0.0003 | 4.6705 | 3260 | 0.9859 | -0.4765 | -4.6712 | 0.7561 | 4.1947 | -50.5998 | -37.6229 | -2.3567 | -2.5430 |
| 0.0022 | 4.6991 | 3280 | 0.9858 | -0.5399 | -4.8405 | 0.75 | 4.3006 | -50.7127 | -37.6651 | -2.3437 | -2.5302 |
| 0.0016 | 4.7278 | 3300 | 0.9901 | -0.3900 | -4.7788 | 0.7561 | 4.3888 | -50.6715 | -37.5652 | -2.3604 | -2.5482 |
| 0.0015 | 4.7564 | 3320 | 0.9915 | -0.4472 | -4.7529 | 0.7439 | 4.3057 | -50.6543 | -37.6033 | -2.3556 | -2.5419 |
| 0.0068 | 4.7851 | 3340 | 1.0009 | -0.6228 | -4.8092 | 0.7683 | 4.1864 | -50.6918 | -37.7204 | -2.3566 | -2.5435 |
| 0.0435 | 4.8138 | 3360 | 0.9982 | -0.3929 | -4.6009 | 0.7561 | 4.2080 | -50.5529 | -37.5671 | -2.3650 | -2.5530 |
| 0.0041 | 4.8424 | 3380 | 1.0030 | -0.4106 | -4.7219 | 0.7378 | 4.3113 | -50.6336 | -37.5790 | -2.3571 | -2.5437 |
| 0.0031 | 4.8711 | 3400 | 0.9665 | -0.2183 | -4.4817 | 0.7683 | 4.2634 | -50.4735 | -37.4507 | -2.3633 | -2.5496 |
| 0.0008 | 4.8997 | 3420 | 0.9724 | -0.3910 | -4.5774 | 0.7622 | 4.1864 | -50.5373 | -37.5659 | -2.3572 | -2.5446 |
| 0.0019 | 4.9284 | 3440 | 0.9784 | -0.3079 | -4.5903 | 0.75 | 4.2823 | -50.5459 | -37.5105 | -2.3559 | -2.5432 |
| 0.0004 | 4.9570 | 3460 | 0.9802 | -0.4062 | -4.6542 | 0.7622 | 4.2479 | -50.5885 | -37.5760 | -2.3582 | -2.5461 |
| 0.0063 | 4.9857 | 3480 | 0.9698 | -0.3503 | -4.7231 | 0.75 | 4.3728 | -50.6344 | -37.5387 | -2.3594 | -2.5471 |
| 0.0013 | 5.0143 | 3500 | 0.9783 | -0.0355 | -4.3472 | 0.7439 | 4.3116 | -50.3838 | -37.3289 | -2.3741 | -2.5616 |
| 0.0051 | 5.0430 | 3520 | 0.9935 | -0.2275 | -4.5304 | 0.75 | 4.3029 | -50.5060 | -37.4569 | -2.3734 | -2.5612 |
| 0.0014 | 5.0716 | 3540 | 0.9793 | -0.4280 | -4.7133 | 0.7561 | 4.2854 | -50.6279 | -37.5905 | -2.3615 | -2.5493 |
| 0.0009 | 5.1003 | 3560 | 0.9986 | -0.5060 | -4.9080 | 0.7561 | 4.4020 | -50.7577 | -37.6425 | -2.3665 | -2.5552 |
| 0.0004 | 5.1289 | 3580 | 0.9914 | -0.5134 | -4.8579 | 0.7683 | 4.3445 | -50.7243 | -37.6474 | -2.3756 | -2.5645 |
| 0.0001 | 5.1576 | 3600 | 1.0033 | -0.2291 | -4.6954 | 0.75 | 4.4663 | -50.6160 | -37.4580 | -2.3894 | -2.5772 |
| 0.0009 | 5.1862 | 3620 | 1.0033 | -0.2055 | -4.7371 | 0.7683 | 4.5316 | -50.6438 | -37.4422 | -2.3936 | -2.5816 |
| 0.0008 | 5.2149 | 3640 | 1.0176 | -0.3229 | -4.8411 | 0.7439 | 4.5181 | -50.7131 | -37.5205 | -2.3866 | -2.5744 |
| 0.0005 | 5.2436 | 3660 | 1.0224 | -0.4097 | -4.9712 | 0.7622 | 4.5615 | -50.7998 | -37.5783 | -2.3865 | -2.5749 |
| 0.0003 | 5.2722 | 3680 | 1.0403 | -0.5507 | -5.1750 | 0.7744 | 4.6243 | -50.9357 | -37.6723 | -2.3794 | -2.5682 |
| 0.0005 | 5.3009 | 3700 | 1.0228 | -0.4905 | -5.1947 | 0.7805 | 4.7042 | -50.9488 | -37.6322 | -2.3795 | -2.5682 |
| 0.0006 | 5.3295 | 3720 | 1.0378 | -0.6007 | -5.2590 | 0.7683 | 4.6584 | -50.9917 | -37.7056 | -2.3796 | -2.5696 |
| 0.0005 | 5.3582 | 3740 | 1.0296 | -0.4368 | -5.1702 | 0.7622 | 4.7334 | -50.9325 | -37.5964 | -2.3879 | -2.5789 |
| 0.0007 | 5.3868 | 3760 | 1.0324 | -0.4259 | -5.1876 | 0.7683 | 4.7617 | -50.9441 | -37.5891 | -2.3880 | -2.5777 |
| 0.0072 | 5.4155 | 3780 | 1.0398 | -0.4193 | -5.1073 | 0.7683 | 4.6879 | -50.8905 | -37.5847 | -2.3883 | -2.5773 |
| 0.0002 | 5.4441 | 3800 | 1.0492 | -0.2759 | -5.0575 | 0.7805 | 4.7816 | -50.8574 | -37.4891 | -2.4041 | -2.5945 |
| 0.0002 | 5.4728 | 3820 | 1.0587 | -0.3267 | -5.1022 | 0.7622 | 4.7754 | -50.8871 | -37.5230 | -2.4098 | -2.6004 |
| 0.0005 | 5.5014 | 3840 | 1.0631 | -0.2539 | -5.1368 | 0.7561 | 4.8829 | -50.9102 | -37.4744 | -2.4090 | -2.5992 |
| 0.0002 | 5.5301 | 3860 | 1.0372 | -0.4144 | -5.2424 | 0.7744 | 4.8280 | -50.9807 | -37.5815 | -2.4013 | -2.5924 |
| 0.0004 | 5.5587 | 3880 | 1.0495 | -0.4714 | -5.3312 | 0.7561 | 4.8597 | -51.0398 | -37.6195 | -2.3924 | -2.5821 |
| 0.001 | 5.5874 | 3900 | 1.0486 | -0.4782 | -5.3662 | 0.7805 | 4.8880 | -51.0631 | -37.6240 | -2.3974 | -2.5880 |
| 0.001 | 5.6160 | 3920 | 1.0506 | -0.3607 | -5.2072 | 0.7744 | 4.8465 | -50.9571 | -37.5456 | -2.4071 | -2.5976 |
| 0.0006 | 5.6447 | 3940 | 1.0483 | -0.3132 | -5.1293 | 0.7378 | 4.8161 | -50.9052 | -37.5140 | -2.4116 | -2.6014 |
| 0.0005 | 5.6734 | 3960 | 1.0553 | -0.3917 | -5.2145 | 0.7439 | 4.8228 | -50.9620 | -37.5664 | -2.4092 | -2.5993 |
| 0.0013 | 5.7020 | 3980 | 1.0599 | -0.4594 | -5.3379 | 0.7805 | 4.8785 | -51.0443 | -37.6115 | -2.4045 | -2.5949 |
| 0.0005 | 5.7307 | 4000 | 1.0585 | -0.6683 | -5.4484 | 0.7683 | 4.7802 | -51.1180 | -37.7507 | -2.3965 | -2.5871 |
| 0.0051 | 5.7593 | 4020 | 1.0491 | -0.7331 | -5.5061 | 0.7744 | 4.7730 | -51.1564 | -37.7939 | -2.4022 | -2.5943 |
| 0.0011 | 5.7880 | 4040 | 1.0520 | -0.5599 | -5.5201 | 0.7683 | 4.9602 | -51.1657 | -37.6785 | -2.4005 | -2.5917 |
| 0.0013 | 5.8166 | 4060 | 1.0609 | -0.5051 | -5.4035 | 0.75 | 4.8985 | -51.0880 | -37.6419 | -2.4058 | -2.5975 |
| 0.0018 | 5.8453 | 4080 | 1.0655 | -0.5281 | -5.4475 | 0.7683 | 4.9194 | -51.1173 | -37.6572 | -2.4031 | -2.5946 |
| 0.0005 | 5.8739 | 4100 | 1.0503 | -0.4433 | -5.3728 | 0.75 | 4.9295 | -51.0676 | -37.6007 | -2.4080 | -2.6002 |
| 0.0002 | 5.9026 | 4120 | 1.0647 | -0.5539 | -5.4257 | 0.7744 | 4.8718 | -51.1028 | -37.6744 | -2.4030 | -2.5951 |
| 0.0003 | 5.9312 | 4140 | 1.0795 | -0.6022 | -5.4344 | 0.7622 | 4.8321 | -51.1086 | -37.7067 | -2.3998 | -2.5919 |
| 0.0006 | 5.9599 | 4160 | 1.0599 | -0.5806 | -5.4356 | 0.7622 | 4.8550 | -51.1094 | -37.6923 | -2.3980 | -2.5898 |
| 0.0008 | 5.9885 | 4180 | 1.0733 | -0.4986 | -5.4546 | 0.7561 | 4.9560 | -51.1221 | -37.6376 | -2.4061 | -2.5986 |
| 0.0002 | 6.0172 | 4200 | 1.0704 | -0.5107 | -5.4161 | 0.7622 | 4.9054 | -51.0964 | -37.6457 | -2.4134 | -2.6052 |
| 0.0001 | 6.0458 | 4220 | 1.0739 | -0.5080 | -5.4632 | 0.75 | 4.9552 | -51.1278 | -37.6439 | -2.4164 | -2.6087 |
| 0.0006 | 6.0745 | 4240 | 1.0734 | -0.4802 | -5.4571 | 0.7622 | 4.9769 | -51.1237 | -37.6253 | -2.4160 | -2.6087 |
| 0.0003 | 6.1032 | 4260 | 1.0684 | -0.4813 | -5.4922 | 0.7622 | 5.0109 | -51.1472 | -37.6260 | -2.4203 | -2.6129 |
| 0.0001 | 6.1318 | 4280 | 1.0806 | -0.4602 | -5.5021 | 0.7439 | 5.0419 | -51.1538 | -37.6120 | -2.4205 | -2.6129 |
| 0.0003 | 6.1605 | 4300 | 1.0743 | -0.4804 | -5.4534 | 0.7439 | 4.9730 | -51.1213 | -37.6255 | -2.4226 | -2.6156 |
| 0.0002 | 6.1891 | 4320 | 1.0910 | -0.4338 | -5.4512 | 0.7805 | 5.0173 | -51.1198 | -37.5944 | -2.4259 | -2.6187 |
| 0.0002 | 6.2178 | 4340 | 1.1028 | -0.4035 | -5.4744 | 0.7561 | 5.0709 | -51.1353 | -37.5742 | -2.4228 | -2.6155 |
| 0.0002 | 6.2464 | 4360 | 1.0730 | -0.3543 | -5.4536 | 0.7683 | 5.0993 | -51.1214 | -37.5414 | -2.4284 | -2.6212 |
| 0.0002 | 6.2751 | 4380 | 1.0913 | -0.3756 | -5.4334 | 0.7378 | 5.0578 | -51.1079 | -37.5556 | -2.4294 | -2.6227 |
| 0.0002 | 6.3037 | 4400 | 1.0897 | -0.4736 | -5.5099 | 0.7378 | 5.0363 | -51.1590 | -37.6209 | -2.4300 | -2.6230 |
| 0.0003 | 6.3324 | 4420 | 1.1019 | -0.5148 | -5.5269 | 0.7439 | 5.0122 | -51.1703 | -37.6484 | -2.4225 | -2.6153 |
| 0.0001 | 6.3610 | 4440 | 1.0869 | -0.4746 | -5.6385 | 0.7744 | 5.1639 | -51.2447 | -37.6216 | -2.4261 | -2.6193 |
| 0.0002 | 6.3897 | 4460 | 1.0844 | -0.4900 | -5.6492 | 0.7561 | 5.1592 | -51.2518 | -37.6318 | -2.4273 | -2.6204 |
| 0.0001 | 6.4183 | 4480 | 1.0832 | -0.4725 | -5.6053 | 0.7622 | 5.1328 | -51.2226 | -37.6202 | -2.4249 | -2.6183 |
| 0.0001 | 6.4470 | 4500 | 1.0951 | -0.4556 | -5.6636 | 0.7683 | 5.2080 | -51.2614 | -37.6089 | -2.4292 | -2.6222 |
| 0.0 | 6.4756 | 4520 | 1.0866 | -0.5072 | -5.6601 | 0.7805 | 5.1529 | -51.2591 | -37.6433 | -2.4246 | -2.6164 |
| 0.0002 | 6.5043 | 4540 | 1.0916 | -0.4715 | -5.7196 | 0.7683 | 5.2481 | -51.2988 | -37.6195 | -2.4297 | -2.6236 |
| 0.0009 | 6.5330 | 4560 | 1.0964 | -0.4816 | -5.6407 | 0.7622 | 5.1591 | -51.2461 | -37.6263 | -2.4245 | -2.6171 |
| 0.0002 | 6.5616 | 4580 | 1.1214 | -0.4599 | -5.6562 | 0.7561 | 5.1962 | -51.2565 | -37.6118 | -2.4333 | -2.6261 |
| 0.0002 | 6.5903 | 4600 | 1.1098 | -0.4649 | -5.7015 | 0.7744 | 5.2366 | -51.2867 | -37.6151 | -2.4341 | -2.6274 |
| 0.0004 | 6.6189 | 4620 | 1.1121 | -0.5644 | -5.7521 | 0.7622 | 5.1877 | -51.3204 | -37.6815 | -2.4334 | -2.6263 |
| 0.0 | 6.6476 | 4640 | 1.1109 | -0.4577 | -5.6589 | 0.7622 | 5.2012 | -51.2583 | -37.6103 | -2.4342 | -2.6273 |
| 0.0004 | 6.6762 | 4660 | 1.1142 | -0.4737 | -5.6591 | 0.7561 | 5.1854 | -51.2584 | -37.6210 | -2.4339 | -2.6276 |
| 0.0002 | 6.7049 | 4680 | 1.1061 | -0.5055 | -5.7155 | 0.7561 | 5.2100 | -51.2960 | -37.6422 | -2.4306 | -2.6235 |
| 0.0002 | 6.7335 | 4700 | 1.0880 | -0.4853 | -5.6204 | 0.75 | 5.1352 | -51.2326 | -37.6287 | -2.4356 | -2.6296 |
| 0.0002 | 6.7622 | 4720 | 1.1035 | -0.5173 | -5.6735 | 0.7622 | 5.1563 | -51.2680 | -37.6500 | -2.4328 | -2.6259 |
| 0.0003 | 6.7908 | 4740 | 1.1121 | -0.4845 | -5.6930 | 0.7439 | 5.2086 | -51.2810 | -37.6282 | -2.4305 | -2.6240 |
| 0.0002 | 6.8195 | 4760 | 1.1118 | -0.4717 | -5.6904 | 0.7378 | 5.2187 | -51.2793 | -37.6197 | -2.4357 | -2.6295 |
| 0.0005 | 6.8481 | 4780 | 1.1005 | -0.5040 | -5.7272 | 0.7866 | 5.2232 | -51.3038 | -37.6412 | -2.4367 | -2.6293 |
| 0.0015 | 6.8768 | 4800 | 1.1215 | -0.5113 | -5.7074 | 0.7561 | 5.1961 | -51.2906 | -37.6460 | -2.4320 | -2.6248 |
| 0.0004 | 6.9054 | 4820 | 1.1107 | -0.4717 | -5.6918 | 0.7561 | 5.2200 | -51.2802 | -37.6197 | -2.4320 | -2.6256 |
| 0.0003 | 6.9341 | 4840 | 1.1153 | -0.5086 | -5.7519 | 0.75 | 5.2433 | -51.3203 | -37.6442 | -2.4312 | -2.6243 |
| 0.0005 | 6.9628 | 4860 | 1.1147 | -0.5398 | -5.7740 | 0.7561 | 5.2341 | -51.3350 | -37.6651 | -2.4292 | -2.6225 |
| 0.0003 | 6.9914 | 4880 | 1.1151 | -0.4804 | -5.7762 | 0.7683 | 5.2958 | -51.3365 | -37.6255 | -2.4291 | -2.6220 |
| 0.0001 | 7.0201 | 4900 | 1.1117 | -0.5175 | -5.7645 | 0.7439 | 5.2469 | -51.3287 | -37.6502 | -2.4347 | -2.6280 |
| 0.0002 | 7.0487 | 4920 | 1.1236 | -0.5374 | -5.8409 | 0.7561 | 5.3035 | -51.3796 | -37.6634 | -2.4315 | -2.6250 |
| 0.0002 | 7.0774 | 4940 | 1.1032 | -0.5376 | -5.7386 | 0.7561 | 5.2010 | -51.3114 | -37.6636 | -2.4378 | -2.6317 |
| 0.0001 | 7.1060 | 4960 | 1.1131 | -0.5442 | -5.8413 | 0.75 | 5.2971 | -51.3799 | -37.6680 | -2.4334 | -2.6275 |
| 0.0001 | 7.1347 | 4980 | 1.1208 | -0.5429 | -5.7231 | 0.7683 | 5.1802 | -51.3011 | -37.6671 | -2.4385 | -2.6326 |
| 0.0003 | 7.1633 | 5000 | 1.0966 | -0.5211 | -5.7483 | 0.7561 | 5.2273 | -51.3179 | -37.6526 | -2.4344 | -2.6286 |
| 0.0001 | 7.1920 | 5020 | 1.1016 | -0.5223 | -5.7960 | 0.7561 | 5.2737 | -51.3497 | -37.6534 | -2.4340 | -2.6274 |
| 0.0003 | 7.2206 | 5040 | 1.1043 | -0.4846 | -5.8107 | 0.7622 | 5.3261 | -51.3595 | -37.6283 | -2.4340 | -2.6278 |
| 0.0001 | 7.2493 | 5060 | 1.1027 | -0.5569 | -5.8161 | 0.7439 | 5.2592 | -51.3631 | -37.6765 | -2.4315 | -2.6248 |
| 0.0002 | 7.2779 | 5080 | 1.1172 | -0.5091 | -5.6916 | 0.7622 | 5.1825 | -51.2801 | -37.6446 | -2.4367 | -2.6307 |
| 0.0001 | 7.3066 | 5100 | 1.1024 | -0.4273 | -5.7785 | 0.75 | 5.3512 | -51.3380 | -37.5901 | -2.4370 | -2.6312 |
| 0.0001 | 7.3352 | 5120 | 1.1125 | -0.5374 | -5.7627 | 0.7744 | 5.2253 | -51.3275 | -37.6634 | -2.4369 | -2.6304 |
| 0.0001 | 7.3639 | 5140 | 1.1132 | -0.5510 | -5.7399 | 0.75 | 5.1889 | -51.3123 | -37.6725 | -2.4394 | -2.6336 |
| 0.0002 | 7.3926 | 5160 | 1.1260 | -0.5236 | -5.7101 | 0.7622 | 5.1865 | -51.2924 | -37.6542 | -2.4368 | -2.6302 |
| 0.0001 | 7.4212 | 5180 | 1.1230 | -0.5582 | -5.7407 | 0.7622 | 5.1825 | -51.3128 | -37.6773 | -2.4317 | -2.6248 |
| 0.0002 | 7.4499 | 5200 | 1.1276 | -0.4874 | -5.7521 | 0.7561 | 5.2648 | -51.3204 | -37.6301 | -2.4420 | -2.6360 |
| 0.0002 | 7.4785 | 5220 | 1.1213 | -0.5305 | -5.7599 | 0.75 | 5.2294 | -51.3256 | -37.6589 | -2.4400 | -2.6343 |
| 0.0002 | 7.5072 | 5240 | 1.1124 | -0.5311 | -5.8026 | 0.7561 | 5.2716 | -51.3541 | -37.6592 | -2.4364 | -2.6296 |
| 0.0002 | 7.5358 | 5260 | 1.0987 | -0.5151 | -5.8600 | 0.7683 | 5.3449 | -51.3924 | -37.6486 | -2.4374 | -2.6315 |
| 0.0003 | 7.5645 | 5280 | 1.1274 | -0.5296 | -5.7695 | 0.7683 | 5.2399 | -51.3320 | -37.6583 | -2.4365 | -2.6296 |
| 0.0001 | 7.5931 | 5300 | 1.1037 | -0.5168 | -5.8267 | 0.75 | 5.3099 | -51.3701 | -37.6497 | -2.4417 | -2.6353 |
| 0.0002 | 7.6218 | 5320 | 1.1130 | -0.5122 | -5.7933 | 0.7622 | 5.2811 | -51.3479 | -37.6467 | -2.4371 | -2.6304 |
| 0.0001 | 7.6504 | 5340 | 1.1096 | -0.5304 | -5.8626 | 0.7622 | 5.3322 | -51.3941 | -37.6588 | -2.4390 | -2.6329 |
| 0.0002 | 7.6791 | 5360 | 1.1134 | -0.5563 | -5.7787 | 0.7683 | 5.2225 | -51.3382 | -37.6761 | -2.4387 | -2.6325 |
| 0.0001 | 7.7077 | 5380 | 1.1191 | -0.4964 | -5.7361 | 0.7805 | 5.2397 | -51.3098 | -37.6361 | -2.4442 | -2.6387 |
| 0.0001 | 7.7364 | 5400 | 1.1221 | -0.5278 | -5.8344 | 0.7561 | 5.3066 | -51.3753 | -37.6571 | -2.4389 | -2.6327 |
| 0.0001 | 7.7650 | 5420 | 1.1255 | -0.4974 | -5.7959 | 0.7439 | 5.2986 | -51.3496 | -37.6368 | -2.4378 | -2.6311 |
| 0.0002 | 7.7937 | 5440 | 1.1317 | -0.5313 | -5.8578 | 0.7683 | 5.3266 | -51.3909 | -37.6594 | -2.4405 | -2.6341 |
| 0.0002 | 7.8223 | 5460 | 1.1171 | -0.5022 | -5.7745 | 0.75 | 5.2723 | -51.3353 | -37.6400 | -2.4417 | -2.6355 |
| 0.0001 | 7.8510 | 5480 | 1.1041 | -0.5059 | -5.8136 | 0.7683 | 5.3077 | -51.3614 | -37.6424 | -2.4416 | -2.6356 |
| 0.0001 | 7.8797 | 5500 | 1.1202 | -0.4504 | -5.6939 | 0.7561 | 5.2434 | -51.2816 | -37.6055 | -2.4399 | -2.6339 |
| 0.0001 | 7.9083 | 5520 | 1.1202 | -0.5155 | -5.7020 | 0.7561 | 5.1865 | -51.2870 | -37.6488 | -2.4392 | -2.6333 |
| 0.0002 | 7.9370 | 5540 | 1.1255 | -0.4878 | -5.8076 | 0.7683 | 5.3198 | -51.3574 | -37.6304 | -2.4410 | -2.6350 |
| 0.0001 | 7.9656 | 5560 | 1.1222 | -0.4668 | -5.8181 | 0.7683 | 5.3513 | -51.3644 | -37.6164 | -2.4407 | -2.6347 |
| 0.0008 | 7.9943 | 5580 | 1.1130 | -0.4448 | -5.8140 | 0.7622 | 5.3692 | -51.3617 | -37.6017 | -2.4410 | -2.6351 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1+cu121
- Datasets 3.5.0
- Tokenizers 0.20.3
|
darkshapes/Flux.1-lite-8B-MLX-Q4
|
darkshapes
| 2025-06-10T17:51:17Z | 0 | 0 | null |
[
"text-to-image",
"en",
"dataset:freepik",
"base_model:Freepik/flux.1-lite-8B",
"base_model:finetune:Freepik/flux.1-lite-8B",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-10T17:19:10Z |
---
name: flux.1-lite-8B-MLX-Q4
base_model: Freepik/flux.1-lite-8B
license: other
pipeline_tag: text-to-image
tasks:
- text-to-image
- image-generation
language: en
datasets: freepik
funded_by: Fal
hardware_type: 64 H100s
cloud_region:
- Western Europe
- East/Midwestern US
cloud_provider: Nebius
hours_used: 2880
get_started_code: uvx --from mflux mflux-generate --base-model dev --steps 50 --guidance 4.0 --seed 10 --width 1024 --height 1024 -q 4 --model darkshapes/flux.1-lite-8B-MLX-Q4 --prompt 'Test Prompt'
---
> [!IMPORTANT]
> Original Model Link : [https://huggingface.co/Freepik/flux.1-lite-8B](https://huggingface.co/Freepik/flux.1-lite-8B)
>
```
name: flux.1-lite-8B-MLX-Q4
base_model: Freepik/flux.1-lite-8B
license: other
pipeline_tag: text-to-image
tasks:
- text-to-image
- image-generation
language: en
datasets: freepik
funded_by: Fal
hardware_type: 64 H100s
cloud_region:
- Western Europe
- East/Midwestern US
cloud_provider: Nebius
hours_used: 2880
get_started_code: uvx --from mflux mflux-generate --base-model dev --steps 50 --guidance 4.0 --seed 10 --width 1024 --height 1024 -q 4 --model darkshapes/flux.1-lite-8B-MLX-Q4 --prompt 'Test Prompt'
```
# flux.1-lite-8B
flux.1-lite-8B is an 8-billion-parameter image synthesis model, a Diffusion Transformer derived from black-forest-labs/Flux.1-dev. Its training data comes from roughly 100k licensed stock images from Freepik.com. Separate self- and cross-attention layers, embedding extraction from intermediate T5 layers, and rotary positional embeddings allow an efficient architecture while maintaining output quality. The model is particularly strong at illustrative and vector styles.
> [!WARNING]
> MLX is a framework for Metal graphics, supported on Apple computers with ARM M-series processors (M1/M2/M3/M4).
> [!NOTE]
> Generation using [uv](https://docs.astral.sh/uv/):
> ```
> uvx --from mflux mflux-generate --base-model dev --steps 50 --guidance 4.0 --seed 10 --width 1024 --height 1024 -q 4 --model darkshapes/flux.1-lite-8B-MLX-Q4 --prompt 'Test Prompt'
>```
> [!NOTE]
> Generation using pipx:
> ```
> pipx run --spec mflux mflux-generate --base-model dev --steps 50 --guidance 4.0 --seed 10 --width 1024 --height 1024 -q 4 --model darkshapes/flux.1-lite-8B-MLX-Q4 --prompt 'Test Prompt'
> ```
|
riswanahamed/Rick-Morty-LLM-F16-GGUF
|
riswanahamed
| 2025-06-10T17:49:53Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"llama-cpp",
"gguf-my-lora",
"en",
"base_model:riswanahamed/Rick-Morty-LLM",
"base_model:quantized:riswanahamed/Rick-Morty-LLM",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-10T17:49:17Z |
---
base_model: riswanahamed/Rick-Morty-LLM
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- llama-cpp
- gguf-my-lora
license: apache-2.0
language:
- en
---
# riswanahamed/Rick-Morty-LLM-F16-GGUF
This LoRA adapter was converted to GGUF format from [`riswanahamed/Rick-Morty-LLM`](https://huggingface.co/riswanahamed/Rick-Morty-LLM) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/riswanahamed/Rick-Morty-LLM) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora Rick-Morty-LLM-f16.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora Rick-Morty-LLM-f16.gguf (...other args)
```
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
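For Python use, the adapter can also be applied through the llama-cpp-python bindings; the file paths below are placeholders and `lora_path` support depends on your llama-cpp-python version, so treat this as a hedged sketch.
```python
from llama_cpp import Llama

# Base model GGUF plus this LoRA adapter (paths are placeholders).
llm = Llama(
    model_path="base_model.gguf",
    lora_path="Rick-Morty-LLM-f16.gguf",
    n_ctx=2048,
)

out = llm(
    "Rick, explain interdimensional travel to Morty in one sentence.",
    max_tokens=128,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```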
|
bruhzair/prototype-0.4x112
|
bruhzair
| 2025-06-10T17:48:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-10T17:28:35Z |
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x112
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method, with /workspace/prototype-0.4x102 as the base.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
* /workspace/cache/models--Sao10K--Llama-3.3-70B-Vulpecula-r1/snapshots/12d7254ab9a5ce21905f59f341a3d2a2b3e62fd5
* /workspace/cache/models--ArliAI--Llama-3.3-70B-ArliAI-RPMax-v2/snapshots/3a47eabeb5861db09dad26fcf0fb0d57114e40d3
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
parameters:
select_topk: 0.4
- model: /workspace/cache/models--Sao10K--Llama-3.3-70B-Vulpecula-r1/snapshots/12d7254ab9a5ce21905f59f341a3d2a2b3e62fd5
parameters:
select_topk: 0.4
- model: /workspace/cache/models--ArliAI--Llama-3.3-70B-ArliAI-RPMax-v2/snapshots/3a47eabeb5861db09dad26fcf0fb0d57114e40d3
parameters:
select_topk: 0.4
- model: /workspace/prototype-0.4x102
parameters:
select_topk: 0.45
base_model: /workspace/prototype-0.4x102
merge_method: sce
tokenizer:
source: base
chat_template: llama3
int8_mask: true
dtype: bfloat16
```
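A minimal sketch for loading the published merge with transformers, assuming the Hub repo id `bruhzair/prototype-0.4x112`, bfloat16 weights, and the llama3 chat template named in the config:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "bruhzair/prototype-0.4x112"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a llama3-style chat prompt and generate a short reply.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```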
|
MetaphoricalCode/Gemma-3-Glitter-27B-exl3-4bpw-hb6
|
MetaphoricalCode
| 2025-06-10T17:48:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"mergekit",
"merge",
"conversational",
"base_model:allura-org/Gemma-3-Glitter-27B",
"base_model:quantized:allura-org/Gemma-3-Glitter-27B",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"exl3",
"region:us"
] |
image-text-to-text
| 2025-06-10T17:19:19Z |
---
base_model:
- allura-org/Gemma-3-Glitter-27B
base_model_relation: quantized
library_name: transformers
tags:
- mergekit
- merge
---
## Quantized using the default exllamav3 (0.0.3) quantization process.
- Original model: https://huggingface.co/allura-org/Gemma-3-Glitter-27B
- exllamav3: https://github.com/turboderp-org/exllamav3
---
# ✨G3 Glitter 27B✨
<figure>
<img src="https://huggingface.co/ToastyPigeon/Gemma-3-Glitter-27B/resolve/main/ComfyUI_02512_.png" width="600">
</figure>
A creative writing model based on Gemma 3 27B.
[Columbidae/gemma-3-27b-half](https://huggingface.co/Columbidae/gemma-3-27b-half), a 50/50 merge of 27B IT and 27B PT, was used as the base model. (This was done because of the success of [Starshine](https://huggingface.co/ToastyPigeon/Gemma-3-Starshine-12B), a 50/50 IT and PT merge.)
The inclusion of the PT model does weaken instruction following, but it also weakens the censorship and hesitancy to participate in certain fictional stories. The prose also becomes more natural with less of the IT model included.
**This model does better with short, to-the-point prompts. Long, detailed system prompts will often confuse it.** (Tested with 1000-2000 token system prompts, with lackluster results compared to 100-500 token prompts.)
## Instruct Format
Uses Gemma 2/3 instruct format and context. Like Glitter 12B, this works well with `temp = 1, top-nsigma = 1.5`.
```
<start_of_turn>user
{User messages; can also put sysprompt here to use the built-in g3 training}<end_of_turn>
<start_of_turn>model
{model response}<end_of_turn>
```
|
DebasishDhal99/polish-to-german-toponym-model-opus-mt-pl-de
|
DebasishDhal99
| 2025-06-10T17:47:46Z | 53 | 0 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"pl",
"de",
"dataset:DebasishDhal99/German_Names_Central_And_Eastern_Europe",
"dataset:DebasishDhal99/german-polish-paired-placenames",
"base_model:Helsinki-NLP/opus-mt-pl-de",
"base_model:finetune:Helsinki-NLP/opus-mt-pl-de",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2025-05-04T17:47:29Z |
---
library_name: transformers
datasets:
- DebasishDhal99/German_Names_Central_And_Eastern_Europe
- DebasishDhal99/german-polish-paired-placenames
language:
- pl
- de
base_model:
- Helsinki-NLP/opus-mt-pl-de
pipeline_tag: translation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Input = a Polish toponym (e.g. Stare Miasto, literally "Old City")
Output = the equivalent German toponym (e.g. Altstadt, also meaning "Old City")
A table of sample outputs is at the bottom.
# Inference Code
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

model_path = "DebasishDhal99/polish-to-german-toponym-model-opus-mt-pl-de"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the fine-tuned MarianMT model and its tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained(model_path).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_path)

polish_name = "Stare miasteczko"  # Change this to any Polish place name

# Tokenize, move tensors to the model's device, and generate the German toponym
inputs = tokenizer(polish_name, return_tensors="pt", padding=True, truncation=True)
inputs = {k: v.to(device) for k, v in inputs.items()}
with torch.no_grad():
    outputs = model.generate(**inputs, max_length=50)

german_name = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(german_name)
```
## Model Details
- Total epochs = 10
- Loss data
- Epoch 1/10, Loss: 0.1758
- Epoch 2/10, Loss: 0.0997
- Epoch 3/10, Loss: 0.0810
- Epoch 4/10, Loss: 0.0673
- Epoch 5/10, Loss: 0.0556
- Epoch 6/10, Loss: 0.0455
- Epoch 7/10, Loss: 0.0364
- Epoch 8/10, Loss: 0.0298
- Epoch 9/10, Loss: 0.0246
- Epoch 10/10, Loss: 0.0197
- Time = Approx. 30 minutes
- Device = 1 × P100 (Available on Kaggle)
- Further training is needed for better performance; I'll make one more such model with more epochs.
## Output Samples
| Polish Input | German Output | Notes |
|---------------------|------------------|---------------------------------------------------------|
| Warszawa | Warschau | Accurate, Capital of Poland |
| Kraków | Krakau | Accurate |
| Poznań | Posen | Accurate |
| Stare Miasteczko | Ebersberg | Inaccurate, "Stare Miasteczko" means "Old Town" |
| Stary rynek | Altmarker | Accurate, means "Old Market" |
| Szczecin | Stettin | Accurate, Historic name for Szczecin |
| Olsztyn | Ellerstein | Inaccurate, correct name is "Allenstein" |
| Rybowo | Riebowen | Inaccurate, Fischdorf would be more accurate |
| Głogowo | Gögenhagen | Inaccurate, historical translation is Glogau |
| Wrocław | Breslau | Accurate, Historic German name for Wrocław |
|
luciehmct/MNLP_M3_quantized_model
|
luciehmct
| 2025-06-10T17:46:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"compressed-tensors",
"region:us"
] |
text-generation
| 2025-06-10T17:46:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xanderhuang/material_refiner
|
xanderhuang
| 2025-06-10T17:45:50Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-10T17:25:34Z |
---
license: apache-2.0
---
|
felerminoali/toucan-base-V2-vmw-pt
|
felerminoali
| 2025-06-10T17:42:06Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:UBC-NLP/toucan-base",
"base_model:finetune:UBC-NLP/toucan-base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-10T15:50:55Z |
---
library_name: transformers
base_model: UBC-NLP/toucan-base
tags:
- generated_from_trainer
model-index:
- name: toucan-base-V2-vmw-pt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# toucan-base-V2-vmw-pt
This model is a fine-tuned version of [UBC-NLP/toucan-base](https://huggingface.co/UBC-NLP/toucan-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 2.10.1
- Tokenizers 0.21.1
|
cam-1000/MNLP_M3_rag_model_v4
|
cam-1000
| 2025-06-10T17:40:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:sophiargh/MNLP_M3_mcqa_model",
"base_model:finetune:sophiargh/MNLP_M3_mcqa_model",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-10T14:16:59Z |
---
library_name: transformers
license: apache-2.0
base_model: sophiargh/MNLP_M3_mcqa_model
tags:
- generated_from_trainer
model-index:
- name: MNLP_M3_rag_model_v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MNLP_M3_rag_model_v4
This model is a fine-tuned version of [sophiargh/MNLP_M3_mcqa_model](https://huggingface.co/sophiargh/MNLP_M3_mcqa_model) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7279 | 0.0228 | 100 | 0.8089 |
| 0.7328 | 0.0456 | 200 | 0.7907 |
| 0.584 | 0.0683 | 300 | 0.8175 |
| 0.6671 | 0.0911 | 400 | 0.8707 |
### Framework versions
- Transformers 4.52.3
- Pytorch 2.7.0+cu126
- Datasets 3.2.0
- Tokenizers 0.21.0
|
Mohibrehman31/custom-head-men-gemma-2-2b-p1
|
Mohibrehman31
| 2025-06-10T17:39:05Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"arxiv:1910.09700",
"base_model:google/gemma-2-2b",
"base_model:adapter:google/gemma-2-2b",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-10T17:38:54Z |
---
base_model: google/gemma-2-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
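No official usage instructions are provided yet. Below is a rough sketch, assuming this repository holds a PEFT adapter for `google/gemma-2-2b` trained with 4-bit bitsandbytes quantization (as the tags and front matter suggest); the gated base model must be accessible, and the custom head hinted at by the repo name may need extra code beyond this:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "google/gemma-2-2b"  # gated; request access on the Hub first
adapter_id = "Mohibrehman31/custom-head-men-gemma-2-2b-p1"

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the adapter weights from this repo on top of the quantized base model.
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```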
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
IqraEval/Iqra_wavlm_base
|
IqraEval
| 2025-06-10T17:37:20Z | 0 | 0 | null |
[
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T08:33:02Z |
---
license: apache-2.0
language:
- en
---
## Testing the Model
To test the model, you can use the Colab notebook linked below:
[▶️ Open in Google Colab](https://colab.research.google.com/drive/1kUVsgEzhrB1ujr85SArNoUcKJh-jHdza?usp=sharing)
|
thomas-erhart/ablation_Qwen3_17B_fullmodel
|
thomas-erhart
| 2025-06-10T17:36:47Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-10T17:36:47Z |
---
license: apache-2.0
---
|
gradientrouting-spar/mc6_badmed_kl_div_beta_kl-10_epochs-3_seed_1_epoch_3
|
gradientrouting-spar
| 2025-06-10T17:36:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-10T17:35:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sengi/llada
|
sengi
| 2025-06-10T17:35:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llada",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] |
feature-extraction
| 2025-06-10T17:29:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
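No usage code is given. A tentative sketch, assuming the custom LLaDA modelling code shipped with the repo loads through `trust_remote_code=True` and exposes the feature-extraction interface indicated by the pipeline tag (the custom architecture may return different output fields):
```python
from transformers import AutoTokenizer, AutoModel

repo_id = "sengi/llada"
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)

inputs = tokenizer("A short example sentence.", return_tensors="pt")
outputs = model(**inputs)  # output fields depend on the custom LLaDA implementation
```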
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
manuross1/prltztlz4k
|
manuross1
| 2025-06-10T17:34:37Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-10T14:15:54Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: prltztlz4k
---
# Prltztlz4K
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `prltztlz4k` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "prltztlz4k",
"lora_weights": "https://huggingface.co/manuross1/prltztlz4k/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('manuross1/prltztlz4k', weight_name='lora.safetensors')
image = pipeline('prltztlz4k').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 4300
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/manuross1/prltztlz4k/discussions) to add images that show off what you’ve made with this LoRA.
|
bcsandlund/grpo-16-game-2
|
bcsandlund
| 2025-06-10T17:33:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-10T17:33:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
timarni/qwen3_reasoning_sft_268
|
timarni
| 2025-06-10T17:32:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"dataset:timarni/reasoning_SFT",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-10T17:31:18Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-0.6B-Base
tags:
- generated_from_trainer
datasets:
- timarni/reasoning_SFT
model-index:
- name: outputs/qwen3_reasoning_sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.9.2`
```yaml
base_model: Qwen/Qwen3-0.6B-Base
# Automatically upload checkpoint and final model to HF
# hub_model_id: username/custom_model_name
plugins:
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
strict: false
chat_template: qwen3
datasets:
- path: timarni/reasoning_SFT
type: chat_template
split: train
field_messages: conversations
# message_property_mappings:
# role: from
# content: value
val_set_size: 0.1
output_dir: ./outputs/qwen3_reasoning_sft
dataset_prepared_path: last_run_prepared
# To be sure that no LORA is done
adapter: null
lora: false
merge_lora: false
sequence_len: 4096 #2048
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true
wandb_project: mnlp_project
wandb_entity: tim-arni
wandb_watch:
wandb_name: qwen3_reasoning_sft
wandb_log_model:
gradient_accumulation_steps: 2 # 16 following https://unsloth.ai/blog/qwen3
micro_batch_size: 1 # 2
num_epochs: 6
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.00005 # 0.0002
cosine_min_lr_ratio: 0.1
bf16: auto
tf32: true
gradient_checkpointing: offload
logging_steps: 1
gradient_clipping: 1.0
flash_attention: true
warmup_ratio: 0.03
evals_per_epoch: 4
saves_per_epoch: 2
save_total_limit: 25
weight_decay: 1e-4
special_tokens:
```
</details><br>
# outputs/qwen3_reasoning_sft
This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) on the timarni/reasoning_SFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8020
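Since the card omits usage code, here is a quick sketch using the 🤗 Transformers text-generation pipeline, assuming the fine-tune keeps the Qwen3 chat template:
```python
from transformers import pipeline

generator = pipeline(
    "text-generation", model="timarni/qwen3_reasoning_sft_268", device_map="auto"
)
messages = [{"role": "user", "content": "Explain why the sky is blue in two sentences."}]
# The pipeline applies the chat template; the assistant reply is the last message.
print(generator(messages, max_new_tokens=256)[0]["generated_text"][-1]["content"])
```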
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 4
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 47
- num_epochs: 6.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.965 | 0.0037 | 1 | 0.8999 |
| 0.8101 | 0.2505 | 67 | 0.7453 |
| 0.6077 | 0.5009 | 134 | 0.7342 |
| 0.5874 | 0.7514 | 201 | 0.7270 |
| 0.4362 | 1.0 | 268 | 0.7260 |
| 0.6779 | 1.2505 | 335 | 0.7269 |
| 0.505 | 1.5009 | 402 | 0.7310 |
| 0.4969 | 1.7514 | 469 | 0.7274 |
| 0.309 | 2.0 | 536 | 0.7332 |
| 0.5954 | 2.2505 | 603 | 0.7428 |
| 0.4302 | 2.5009 | 670 | 0.7514 |
| 0.4301 | 2.7514 | 737 | 0.7491 |
| 0.23 | 3.0 | 804 | 0.7559 |
| 0.5296 | 3.2505 | 871 | 0.7683 |
| 0.3761 | 3.5009 | 938 | 0.7857 |
| 0.3916 | 3.7514 | 1005 | 0.7818 |
| 0.1842 | 4.0 | 1072 | 0.7863 |
| 0.4926 | 4.2505 | 1139 | 0.7980 |
| 0.3469 | 4.5009 | 1206 | 0.8004 |
| 0.3697 | 4.7514 | 1273 | 0.7908 |
| 0.1665 | 5.0 | 1340 | 0.7925 |
| 0.4773 | 5.2505 | 1407 | 0.8187 |
| 0.3364 | 5.5009 | 1474 | 0.8071 |
| 0.3622 | 5.7514 | 1541 | 0.8020 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu121
- Datasets 3.5.1
- Tokenizers 0.21.1
|
cragtmp/3add2gt6-650
|
cragtmp
| 2025-06-10T17:30:10Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-11B-Vision-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-11B-Vision-Instruct",
"region:us"
] | null | 2025-06-10T17:29:55Z |
---
base_model: meta-llama/Llama-3.2-11B-Vision-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
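No usage example is provided. A rough sketch, assuming this repo is a PEFT adapter for the gated `meta-llama/Llama-3.2-11B-Vision-Instruct` base listed in the front matter:
```python
import torch
from transformers import MllamaForConditionalGeneration, AutoProcessor
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"  # gated; access required
adapter_id = "cragtmp/3add2gt6-650"

model = MllamaForConditionalGeneration.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the adapter weights
```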
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
mohammadmahdinouri/expressive-distilled-interleaved-speech
|
mohammadmahdinouri
| 2025-06-10T17:30:02Z | 999 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-15T14:49:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
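No usage code is provided. A minimal causal-LM sketch based only on the repo tags (`llama`, `text-generation`); note that the model name suggests interleaved speech tokens, so the prompt format and outputs shown here are assumptions:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "mohammadmahdinouri/expressive-distilled-interleaved-speech"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```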
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MaiAhmed/medgemma-4b-it-sft-lora-flare-multi-classification
|
MaiAhmed
| 2025-06-10T17:29:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-06-10T16:43:11Z |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-4b-it-sft-lora-flare-multi-classification
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for medgemma-4b-it-sft-lora-flare-multi-classification
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MaiAhmed/medgemma-4b-it-sft-lora-flare-multi-classification", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mai-cs/huggingface/runs/poz13ldk)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.51.3
- Pytorch: 2.3.1+cu118
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
manuross1/prltztlzx4k
|
manuross1
| 2025-06-10T17:28:48Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-10T16:46:33Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: prltztlzx4k
---
# Prltztlzx4K
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `prltztlzx4k` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "prltztlzx4k",
"lora_weights": "https://huggingface.co/manuross1/prltztlzx4k/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('manuross1/prltztlzx4k', weight_name='lora.safetensors')
image = pipeline('prltztlzx4k').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 4300
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/manuross1/prltztlzx4k/discussions) to add images that show off what you’ve made with this LoRA.
|
efraimdahl/RagtimeMetric_enc_vcond_lowgt
|
efraimdahl
| 2025-06-10T17:28:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-10T15:13:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
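No usage code is provided. A minimal sketch based on the repo tags (`gpt2`, `text-generation`); the model appears to target symbolic (ragtime) music, so the prompt below is only an assumed input format:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="efraimdahl/RagtimeMetric_enc_vcond_lowgt")
# "X:1\n" is an ABC-notation-style header; the real expected prompt format may differ.
print(generator("X:1\n", max_new_tokens=64)[0]["generated_text"])
```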
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
timarni/qwen3_reasoning_sft
|
timarni
| 2025-06-10T17:26:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"dataset:timarni/reasoning_SFT",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-10T17:26:17Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-0.6B-Base
tags:
- generated_from_trainer
datasets:
- timarni/reasoning_SFT
model-index:
- name: outputs/qwen3_reasoning_sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.9.2`
```yaml
base_model: Qwen/Qwen3-0.6B-Base
# Automatically upload checkpoint and final model to HF
# hub_model_id: username/custom_model_name
plugins:
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
strict: false
chat_template: qwen3
datasets:
- path: timarni/reasoning_SFT
type: chat_template
split: train
field_messages: conversations
# message_property_mappings:
# role: from
# content: value
val_set_size: 0.1
output_dir: ./outputs/qwen3_reasoning_sft
dataset_prepared_path: last_run_prepared
# To be sure that no LORA is done
adapter: null
lora: false
merge_lora: false
sequence_len: 4096 #2048
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true
wandb_project: mnlp_project
wandb_entity: tim-arni
wandb_watch:
wandb_name: qwen3_reasoning_sft
wandb_log_model:
gradient_accumulation_steps: 2 # 16 following https://unsloth.ai/blog/qwen3
micro_batch_size: 1 # 2
num_epochs: 6
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.00005 # 0.0002
cosine_min_lr_ratio: 0.1
bf16: auto
tf32: true
gradient_checkpointing: offload
logging_steps: 1
gradient_clipping: 1.0
flash_attention: true
warmup_ratio: 0.03
evals_per_epoch: 4
saves_per_epoch: 2
save_total_limit: 25
weight_decay: 1e-4
special_tokens:
```
</details><br>
# outputs/qwen3_reasoning_sft
This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) on the timarni/reasoning_SFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8020
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 4
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 47
- num_epochs: 6.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.965 | 0.0037 | 1 | 0.8999 |
| 0.8101 | 0.2505 | 67 | 0.7453 |
| 0.6077 | 0.5009 | 134 | 0.7342 |
| 0.5874 | 0.7514 | 201 | 0.7270 |
| 0.4362 | 1.0 | 268 | 0.7260 |
| 0.6779 | 1.2505 | 335 | 0.7269 |
| 0.505 | 1.5009 | 402 | 0.7310 |
| 0.4969 | 1.7514 | 469 | 0.7274 |
| 0.309 | 2.0 | 536 | 0.7332 |
| 0.5954 | 2.2505 | 603 | 0.7428 |
| 0.4302 | 2.5009 | 670 | 0.7514 |
| 0.4301 | 2.7514 | 737 | 0.7491 |
| 0.23 | 3.0 | 804 | 0.7559 |
| 0.5296 | 3.2505 | 871 | 0.7683 |
| 0.3761 | 3.5009 | 938 | 0.7857 |
| 0.3916 | 3.7514 | 1005 | 0.7818 |
| 0.1842 | 4.0 | 1072 | 0.7863 |
| 0.4926 | 4.2505 | 1139 | 0.7980 |
| 0.3469 | 4.5009 | 1206 | 0.8004 |
| 0.3697 | 4.7514 | 1273 | 0.7908 |
| 0.1665 | 5.0 | 1340 | 0.7925 |
| 0.4773 | 5.2505 | 1407 | 0.8187 |
| 0.3364 | 5.5009 | 1474 | 0.8071 |
| 0.3622 | 5.7514 | 1541 | 0.8020 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu121
- Datasets 3.5.1
- Tokenizers 0.21.1
|
vidyc/direct_dpo_gemini_m1_open_trl_20k_step_dpo_no_ref
|
vidyc
| 2025-06-10T17:26:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-10T17:25:40Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
soundTeam/Magistral-Small-2506_mlx-8bpw
|
soundTeam
| 2025-06-10T17:25:47Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:mlx-community/Magistral-Small-2506-bf16",
"base_model:quantized:mlx-community/Magistral-Small-2506-bf16",
"license:apache-2.0",
"8-bit",
"region:us"
] |
text-generation
| 2025-06-10T17:08:12Z |
---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
library_name: mlx
inference: false
base_model: mlx-community/Magistral-Small-2506-bf16
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
pipeline_tag: text-generation
tags:
- mlx
---
# soundTeam/Magistral-Small-2506_mlx-8bpw
This model [soundTeam/Magistral-Small-2506_mlx-8bpw](https://huggingface.co/soundTeam/Magistral-Small-2506_mlx-8bpw) was
converted to MLX format from [mlx-community/Magistral-Small-2506-bf16](https://huggingface.co/mlx-community/Magistral-Small-2506-bf16)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("soundTeam/Magistral-Small-2506_mlx-8bpw")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
JuliusBelmont123/results
|
JuliusBelmont123
| 2025-06-10T17:24:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-10T17:24:36Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8350
- Matthews Correlation: 0.5229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4925 | 1.0 | 1069 | 0.4670 | 0.4664 |
| 0.3304 | 2.0 | 2138 | 0.6078 | 0.5030 |
| 0.2344 | 3.0 | 3207 | 0.8350 | 0.5229 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
OpenBuddy/OpenBuddy-R1-0528-Distill-Qwen3-32B-Preview0-QAT
|
OpenBuddy
| 2025-06-10T17:23:26Z | 18 | 16 | null |
[
"safetensors",
"qwen3",
"text-generation",
"conversational",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"fi",
"base_model:Qwen/Qwen3-32B",
"base_model:finetune:Qwen/Qwen3-32B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-06-08T04:41:26Z |
---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- fi
license: apache-2.0
tags:
- qwen3
pipeline_tag: text-generation
base_model: Qwen/Qwen3-32B
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Model Info
Base Model: Qwen/Qwen3-32B
Context Length: 40K Tokens
License: Apache 2.0
Training Data: Distilled from DeepSeek-R1-0528
# Prompt Format
We recommend using the fast tokenizer from `transformers`, which should be enabled by default in the `transformers` and `vllm` libraries. Other implementations including `sentencepiece` may not work as expected, especially for special tokens like `<|role|>`, `<|says|>` and `<|end|>`.
```
<|role|>system<|says|>You(assistant) are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human(user).
Current mode: System 2, think step-by-step and answer.<|end|>
<|role|>user<|says|>History input 1<|end|>
<|role|>assistant<|says|>History output 1<|end|>
<|role|>user<|says|>History input 2<|end|>
<|role|>assistant<|says|>History output 2<|end|>
<|role|>user<|says|>Current input<|end|>
<|role|>assistant<|says|>
```
This format is also defined in `tokenizer_config.json`, which means you can directly use `vllm` to deploy an OpenAI-like API service. For more information, please refer to the [vllm documentation](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html).
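For example, a minimal sketch of building this prompt with the `transformers` fast tokenizer (assuming the chat template shipped in this repository's `tokenizer_config.json`; the messages below are only an illustration):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("OpenBuddy/OpenBuddy-R1-0528-Distill-Qwen3-32B-Preview0-QAT")

messages = [
    {
        "role": "system",
        "content": "You(assistant) are a helpful, respectful and honest INTP-T AI Assistant named Buddy. "
                   "You are talking to a human(user).\nCurrent mode: System 2, think step-by-step and answer.",
    },
    {"role": "user", "content": "Hello!"},
]

# Renders the <|role|>...<|says|>...<|end|> format shown above and appends the
# trailing "<|role|>assistant<|says|>" so the model can start generating.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```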
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## 免责声明
所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。
OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。
使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。
|
mdmy/qwen2-vl-nutritiontable-vision-text-v1
|
mdmy
| 2025-06-10T17:21:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-10T17:21:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PerryP/test_bash
|
PerryP
| 2025-06-10T17:19:52Z | 0 | 0 | null |
[
"safetensors",
"qwen3",
"model_hub_mixin",
"8-bit",
"region:us"
] | null | 2025-06-10T17:10:09Z |
---
tags:
- model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
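Since the code reference above is still missing, the snippet below is only a generic sketch of the `PyTorchModelHubMixin` pattern; the `MyModel` class and repo id are hypothetical placeholders, not this repository's actual architecture:
```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin


class MyModel(nn.Module, PyTorchModelHubMixin):  # hypothetical example class
    def __init__(self, hidden_size: int = 16):
        super().__init__()
        self.linear = nn.Linear(hidden_size, hidden_size)

    def forward(self, x):
        return self.linear(x)


model = MyModel(hidden_size=16)
model.push_to_hub("your-username/your-repo")            # uploads the weights plus a config.json with the init kwargs
reloaded = MyModel.from_pretrained("your-username/your-repo")
```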
|
MinaMila/llama_instbase_unlearned_ug2_e-6_1.0_0.5_0.25_0.25_ep2_LoRa_ACSEmployment_2_ep1_22
|
MinaMila
| 2025-06-10T17:19:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-10T17:19:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ccpfoye/MNLP_M3_quantized_model
|
ccpfoye
| 2025-06-10T17:19:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-10T17:19:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sergioalves/36d6d0dd-6168-44f8-a2da-f228348074f6
|
sergioalves
| 2025-06-10T17:18:57Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-10T16:19:11Z |
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 36d6d0dd-6168-44f8-a2da-f228348074f6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 0ffdcbfcce196a00_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 0.8
group_by_length: false
hub_model_id: sergioalves/36d6d0dd-6168-44f8-a2da-f228348074f6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-07
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.3
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 300
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/0ffdcbfcce196a00_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1bfe6bc8-12b6-46c8-8f8c-44314bcdb12f
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 1bfe6bc8-12b6-46c8-8f8c-44314bcdb12f
warmup_steps: 30
weight_decay: 0.05
xformers_attention: true
```
</details><br>
# 36d6d0dd-6168-44f8-a2da-f228348074f6
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0508
## Model description
More information needed
## Intended uses & limitations
More information needed
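A minimal usage sketch, assuming the adapter weights in this repository follow the standard PEFT layout produced by Axolotl (not an officially documented workflow for this checkpoint):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Nous-Hermes-2-Mistral-7B-DPO"
adapter_id = "sergioalves/36d6d0dd-6168-44f8-a2da-f228348074f6"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")

# Attach the trained LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```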
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.0449 | 0.0001 | 1 | 1.1435 |
| 4.1282 | 0.0205 | 150 | 1.0765 |
| 3.6345 | 0.0409 | 300 | 1.0508 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
morturr/Llama-3.1-8B-amazon-2025-06-10
|
morturr
| 2025-06-10T17:18:36Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"region:us"
] | null | 2025-06-09T22:13:05Z |
---
library_name: peft
license: llama3.1
base_model: meta-llama/Llama-3.1-8B
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-3.1-8B-amazon-2025-06-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.1-8B-amazon-2025-06-10
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
morturr/Mistral-7B-v0.1-LOO_headlines-COMB_one_liners-comb2-seed18-2025-06-10
|
morturr
| 2025-06-10T17:18:14Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-06-10T17:17:59Z |
---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.1-LOO_headlines-COMB_one_liners-comb2-seed18-2025-06-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1-LOO_headlines-COMB_one_liners-comb2-seed18-2025-06-10
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
KiteAether/khmer-gemma3-06-10-epoch1-lr5e5-cosine
|
KiteAether
| 2025-06-10T17:17:41Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-pt",
"base_model:finetune:google/gemma-3-4b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-06-10T17:16:22Z |
---
base_model: google/gemma-3-4b-pt
library_name: transformers
model_name: khmer-gemma3-06-10-epoch1-lr5e5-cosine
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for khmer-gemma3-06-10-epoch1-lr5e5-cosine
This model is a fine-tuned version of [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="KiteAether/khmer-gemma3-06-10-epoch1-lr5e5-cosine", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chrollo215-personal/huggingface/runs/5v90dqx7)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
krish10/Qwen3_14B_16bit_Intervention_screen
|
krish10
| 2025-06-10T17:17:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-14B",
"base_model:finetune:unsloth/Qwen3-14B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-10T17:04:55Z |
---
base_model: unsloth/Qwen3-14B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded fine-tuned model
- **Developed by:** krish10
- **License:** apache-2.0
- **Fine-tuned from model:** unsloth/Qwen3-14B
This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
joyce8/EMBER2024-benchmark-models
|
joyce8
| 2025-06-10T17:17:06Z | 0 | 0 | null |
[
"arxiv:2506.05074",
"license:apache-2.0",
"region:us"
] | null | 2025-06-06T13:27:52Z |
---
license: apache-2.0
---
# EMBER2024 Classifiers
This repository includes 14 LightGBM classifiers trained on different malware analysis tasks, including malicious/benign file detection, malware family identification, and malware behavior prediction. These classifiers were trained and evaluated using the EMBER2024 dataset.
Paper about the classifiers: [https://arxiv.org/pdf/2506.05074](https://arxiv.org/pdf/2506.05074)
GitHub repo for the EMBER2024 dataset: [https://github.com/FutureComputing4AI/EMBER2024/](https://github.com/FutureComputing4AI/EMBER2024/)
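A minimal sketch of scoring a feature vector with one of these boosters (the model file name below is a placeholder for whichever task-specific file you download from this repository; feature extraction itself is handled by the EMBER2024 code base linked above):
```python
import lightgbm as lgb
import numpy as np

# Placeholder file name; substitute the actual booster file from this repository.
booster = lgb.Booster(model_file="EMBER2024_malicious_benign.txt")

# EMBER-style features are fixed-length float vectors extracted from PE files.
features = np.zeros((1, booster.num_feature()), dtype=np.float32)

score = booster.predict(features)[0]
print(f"predicted score: {score:.4f}")
```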
## Citing
If you use EMBER2024 in your own research, please cite it using:
```
@inproceedings{joyce2025ember,
title={EMBER2024 - A Benchmark Dataset for Holistic Evaluation of Malware Classifiers},
author={Robert J. Joyce and Gideon Miller and Phil Roth and Richard Zak and Elliott Zaresky-Williams and Hyrum Anderson and Edward Raff and James Holt},
year={2025},
booktitle={Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
}
```
|
reza-rgb/M3_attempt_balanced
|
reza-rgb
| 2025-06-10T17:17:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-10T17:15:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kristenzhang/qwen2.5-3b-math-grpo-epoch1
|
kristenzhang
| 2025-06-10T17:15:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-10T16:46:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
happyme531/Qwen3-Embedding-RKLLM
|
happyme531
| 2025-06-10T17:15:08Z | 0 | 0 | null |
[
"rkllm",
"base_model:Qwen/Qwen3-Embedding-0.6B",
"base_model:finetune:Qwen/Qwen3-Embedding-0.6B",
"license:agpl-3.0",
"region:us"
] | null | 2025-06-09T06:58:27Z |
---
license: agpl-3.0
base_model:
- Qwen/Qwen3-Embedding-0.6B
- Qwen/Qwen3-Embedding-4B
- Qwen/Qwen3-Embedding-8B
tags:
- rkllm
---
|
BeaverAI/Cydonia-24B-v3f-GGUF
|
BeaverAI
| 2025-06-10T17:15:06Z | 0 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2025-06-10T16:54:00Z |
Quick test on Cydonia + Magistral
|
TheGardener/KD-MLP-pruned-llama3.2-0.83B-epoch-1st-ver3
|
TheGardener
| 2025-06-10T17:13:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-10T17:13:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dgambettaphd/M_llm2_run0_gen8_WXS_doc1000_synt64_lr1e-04_acm_SYNALL
|
dgambettaphd
| 2025-06-10T17:11:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-10T17:11:27Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| wanhin/Qwen2.5-7B-Instruct_1e_fullfinetune | wanhin | 2025-06-10T17:11:03Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-10T16:40:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed to provide further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
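Until an official snippet is provided, the sketch below is one assumed way to load the model, based on this card's metadata (a Qwen2-architecture causal LM with a conversational text-generation tag) and the standard 🤗 Transformers API:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wanhin/Qwen2.5-7B-Instruct_1e_fullfinetune"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Chat-style prompting; the chat template is assumed to be inherited from Qwen2.5-Instruct.
messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```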
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| piksis18/mcqa_ftd1 | piksis18 | 2025-06-10T17:10:59Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-10T17:06:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed to provide further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
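Until an official snippet is provided, the sketch below is one assumed way to query the model via the 🤗 Transformers text-generation pipeline; the multiple-choice prompt format is hypothetical, since the format used during fine-tuning is not documented here:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="piksis18/mcqa_ftd1", device_map="auto")

# Hypothetical MCQA-style prompt; adjust to the format used during fine-tuning.
prompt = (
    "Question: Which planet is known as the Red Planet?\n"
    "A. Venus\nB. Mars\nC. Jupiter\nD. Saturn\n"
    "Answer:"
)
print(generator(prompt, max_new_tokens=8, do_sample=False)[0]["generated_text"])
```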
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| pp-beta/loratest | pp-beta | 2025-06-10T17:10:14Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO", "base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO", "region:us"] | null | 2025-06-10T17:06:42Z |
---
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed to provide further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
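This repository holds a PEFT adapter for `NousResearch/Nous-Hermes-2-Mistral-7B-DPO` (see the metadata above). Until an official snippet is provided, the sketch below is one assumed way to attach the adapter to its base model:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Nous-Hermes-2-Mistral-7B-DPO"
adapter_id = "pp-beta/loratest"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

# Load the adapter weights from this repository on top of the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```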
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
| akseljoonas/Agentic-Qwen-3B-e7-lr4-b128 | akseljoonas | 2025-06-10T17:08:55Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "sft", "conversational", "dataset:akseljoonas/codeagent-traces-answers", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-10T16:24:46Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
datasets: akseljoonas/codeagent-traces-answers
library_name: transformers
model_name: Agentic-Qwen-3B-e7-lr4-b128
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Agentic-Qwen-3B-e7-lr4-b128
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the [akseljoonas/codeagent-traces-answers](https://huggingface.co/datasets/akseljoonas/codeagent-traces-answers) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="akseljoonas/Agentic-Qwen-3B-e7-lr4-b128", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/akseljoonas-university-of-groningen/huggingface/runs/txqwvl96)
This model was trained with SFT.
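For reference, a minimal sketch of how such an SFT run might be launched with TRL is shown below. The hyperparameters are assumptions loosely inferred from the model name (7 epochs, an effective batch size of 128) and are not the verified training recipe.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("akseljoonas/codeagent-traces-answers", split="train")

# Assumed settings; see the note above.
config = SFTConfig(
    output_dir="Agentic-Qwen-3B-e7-lr4-b128",
    num_train_epochs=7,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=16,  # 8 x 16 = effective batch size of 128
    learning_rate=4e-5,
    bf16=True,
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-3B-Instruct",
    args=config,
    train_dataset=dataset,
)
trainer.train()
```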
### Framework versions
- TRL: 0.16.0
- Transformers: 4.52.4
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
| nathunt1996/09a82f68-9500-4738-b2c7-93abbf061cc7 | nathunt1996 | 2025-06-10T17:07:07Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:THUDM/agentlm-70b", "base_model:adapter:THUDM/agentlm-70b", "region:us"] | null | 2025-06-10T17:04:11Z |
---
base_model: THUDM/agentlm-70b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed to provide further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
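This repository holds a PEFT adapter for `THUDM/agentlm-70b` (see the metadata above). Until an official snippet is provided, the sketch below is one assumed way to attach the adapter; note that the 70B base model requires substantial GPU memory.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "THUDM/agentlm-70b"
adapter_id = "nathunt1996/09a82f68-9500-4738-b2c7-93abbf061cc7"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# device_map="auto" shards the large base model across available GPUs (requires accelerate).
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```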
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
| potsu-potsu/medembed-base-mrl | potsu-potsu | 2025-06-10T17:05:09Z | 0 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:4012", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "en", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2025-06-10T17:05:04Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4012
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Do cephalopods use RNA editing less frequently than other species?
sentences:
- 'Extensive messenger RNA editing generates transcript and protein diversity in
genes involved in neural excitability, as previously described, as well as in
genes participating in a broad range of other cellular functions. '
- GV1001 is a 16-amino-acid vaccine peptide derived from the human telomerase reverse
transcriptase sequence. It has been developed as a vaccine against various cancers.
- Using acetyl-specific K516 antibodies, we show that acetylation of endogenous
S6K1 at this site is potently induced upon growth factor stimulation. We propose
that K516 acetylation may serve to modulate important kinase-independent functions
of S6K1 in response to growth factor signalling. Following mitogen stimulation,
S6Ks interact with the p300 and p300/CBP-associated factor (PCAF) acetyltransferases.
S6Ks can be acetylated by p300 and PCAF in vitro and S6K acetylation is detected
in cells expressing p300
- source_sentence: Can pets affect infant microbiomed?
sentences:
- Yes, exposure to household furry pets influences the gut microbiota of infants.
- Thiazovivin is a selective small molecule that directly targets Rho-associated
kinase (ROCK) and increases expression of pluripotency factors.
- ' Here, we present evidence that the calcium/calmodulin-dependent protein kinase
IV (CaMK4) is increased and required during Th17 cell differentiation. Inhibition
of CaMK4 reduced Il17 transcription through decreased activation of the cAMP response
element modulator a (CREM-a) and reduced activation of the AKT/mTOR pathway, which
is known to enhance Th17 differentiation. CAMK4 knockdown and kinase-dead mutant
inhibited crocin-mediated HO-1 expression, Nrf2 activation, and phosphorylation
of Akt, indicating that HO-1 expression is mediated by CAMK4 and that Akt is a
downstream mediator of CAMK4 in crocin signaling'
- source_sentence: In what proportion of children with heart failure has Enalapril
been shown to be safe and effective?
sentences:
- 5-HT2A (5-hydroxytryptamine type 2a) receptor can be evaluated with the [18F]altanserin.
- "In children with heart failure evidence of the effect of enalapril is empirical.\
\ Enalapril was clinically safe and effective in 50% to 80% of for children with\
\ cardiac failure secondary to congenital heart malformations before and after\
\ cardiac surgery, impaired ventricular function , valvar regurgitation, congestive\
\ cardiomyopathy, , arterial hypertension, life-threatening arrhythmias coexisting\
\ with circulatory insufficiency. \nACE inhibitors have shown a transient beneficial\
\ effect on heart failure due to anticancer drugs and possibly a beneficial effect\
\ in muscular dystrophy-associated cardiomyopathy, which deserves further studies."
- "necroptosis\napoptosis \npro-survival/inflammation NF-κB activation"
- source_sentence: How are SAHFS created?
sentences:
- In particular, up to 17% of neutrophil nuclei of healthy women exhibit a drumstick-shaped
appendage that contains the inactive X chromosome.
- miR-1, miR-133, miR-208a, miR-206, miR-494, miR-146a, miR-222, miR-21, miR-221,
miR-20a, miR-133a, miR-133b, miR-23, miR-107 and miR-181 are involved in exercise
adaptation
- Cellular senescence-associated heterochromatic foci (SAHFS) are a novel type of
chromatin condensation involving alterations of linker histone H1 and linker DNA-binding
proteins. SAHFS can be formed by a variety of cell types, but their mechanism
of action remains unclear.
- source_sentence: What are the effects of the deletion of all three Pcdh clusters
(tricluster deletion) in mice?
sentences:
- Multicluster Pcdh diversity is required for mouse olfactory neural circuit assembly.
The vertebrate clustered protocadherin (Pcdh) cell surface proteins are encoded
by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although deletion
of individual Pcdh clusters had subtle phenotypic consequences, the loss of all
three clusters (tricluster deletion) led to a severe axonal arborization defect
and loss of self-avoidance.
- The myocyte enhancer factor-2 (MEF2) proteins are MADS-box transcription factors
that are essential for differentiation of all muscle lineages but their mechanisms
of action remain largely undefined. MEF2C expression initiates cardiomyogenesis,
resulting in the up-regulation of Brachyury T, bone morphogenetic protein-4, Nkx2-5,
GATA-4, cardiac alpha-actin, and myosin heavy chain expression. Inactivation of
the MEF2C gene causes cardiac developmental arrest and severe downregulation of
a number of cardiac markers including atrial natriuretic factor (ANF). BMP-2,
a regulator of cardiac development during embryogenesis, was shown to increase
PI 3-kinase activity in cardiac precursor cells, resulting in increased expression
of sarcomeric myosin heavy chain (MHC) and MEF-2A. Furthermore, expression of
MEF-2A increased MHC expression in a PI 3-kinase-dependent manner. Other studies
showed that Gli2 and MEF2C proteins form a complex, capable of synergizing on
cardiomyogenesis-related promoters. Dominant interference of calcineurin/mAKAP
binding blunts the increase in MEF2 transcriptional activity seen during myoblast
differentiation, as well as the expression of endogenous MEF2-target genes. These
findings show that MEF-2 can direct early stages of cell differentiation into
a cardiomyogenic pathway.
- Investigators proposed that there have been three extended periods in the evolution
of gene regulatory elements. Early vertebrate evolution was characterized by regulatory
gains near transcription factors and developmental genes, but this trend was replaced
by innovations near extracellular signaling genes, and then innovations near posttranslational
protein modifiers.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: Biomedical MRL
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.8500707213578501
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9377652050919377
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9504950495049505
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9674681753889675
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8500707213578501
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3125884016973126
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19009900990099007
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09674681753889673
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8500707213578501
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9377652050919377
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9504950495049505
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9674681753889675
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9123173189785756
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8941778361509621
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8951587766172264
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.8486562942008486
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9349363507779349
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9519094766619519
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9674681753889675
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8486562942008486
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3116454502593116
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19038189533239033
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09674681753889672
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8486562942008486
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9349363507779349
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9519094766619519
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9674681753889675
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9119495367876664
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8937164634830831
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8948057981361003
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.8373408769448374
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9278642149929278
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9434229137199435
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9547383309759547
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8373408769448374
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3092880716643093
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18868458274398867
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09547383309759547
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8373408769448374
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9278642149929278
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9434229137199435
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9547383309759547
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9017656707014216
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8841539255966414
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8857155093016021
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.8189533239038189
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9108910891089109
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9278642149929278
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9405940594059405
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8189533239038189
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.30363036303630364
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18557284299858556
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09405940594059405
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8189533239038189
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9108910891089109
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9278642149929278
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9405940594059405
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8856187513669239
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8673553579847783
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.869253499575075
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.7736916548797736
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8882602545968883
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9108910891089109
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.925035360678925
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7736916548797736
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2960867515322961
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18217821782178212
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09250353606789247
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7736916548797736
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8882602545968883
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9108910891089109
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.925035360678925
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8573911656884706
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.834872926068117
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8366311237261763
name: Cosine Map@100
---
# Biomedical MRL
This is a [sentence-transformers](https://www.SBERT.net) model trained on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("potsu-potsu/medembed-base-mrl")
# Run inference
sentences = [
'What are the effects of the deletion of all three Pcdh clusters (tricluster deletion) in mice?',
'Multicluster Pcdh diversity is required for mouse olfactory neural circuit assembly. The vertebrate clustered protocadherin (Pcdh) cell surface proteins are encoded by three closely linked gene clusters (Pcdhα, Pcdhβ, and Pcdhγ). Although deletion of individual Pcdh clusters had subtle phenotypic consequences, the loss of all three clusters (tricluster deletion) led to a severe axonal arborization defect and loss of self-avoidance.',
'Investigators proposed that there have been three extended periods in the evolution of gene regulatory elements. Early vertebrate evolution was characterized by regulatory gains near transcription factors and developmental genes, but this trend was replaced by innovations near extracellular signaling genes, and then innovations near posttranslational protein modifiers.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 768
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.8501 |
| cosine_accuracy@3 | 0.9378 |
| cosine_accuracy@5 | 0.9505 |
| cosine_accuracy@10 | 0.9675 |
| cosine_precision@1 | 0.8501 |
| cosine_precision@3 | 0.3126 |
| cosine_precision@5 | 0.1901 |
| cosine_precision@10 | 0.0967 |
| cosine_recall@1 | 0.8501 |
| cosine_recall@3 | 0.9378 |
| cosine_recall@5 | 0.9505 |
| cosine_recall@10 | 0.9675 |
| **cosine_ndcg@10** | **0.9123** |
| cosine_mrr@10 | 0.8942 |
| cosine_map@100 | 0.8952 |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 512
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.8487 |
| cosine_accuracy@3 | 0.9349 |
| cosine_accuracy@5 | 0.9519 |
| cosine_accuracy@10 | 0.9675 |
| cosine_precision@1 | 0.8487 |
| cosine_precision@3 | 0.3116 |
| cosine_precision@5 | 0.1904 |
| cosine_precision@10 | 0.0967 |
| cosine_recall@1 | 0.8487 |
| cosine_recall@3 | 0.9349 |
| cosine_recall@5 | 0.9519 |
| cosine_recall@10 | 0.9675 |
| **cosine_ndcg@10** | **0.9119** |
| cosine_mrr@10 | 0.8937 |
| cosine_map@100 | 0.8948 |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 256
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.8373 |
| cosine_accuracy@3 | 0.9279 |
| cosine_accuracy@5 | 0.9434 |
| cosine_accuracy@10 | 0.9547 |
| cosine_precision@1 | 0.8373 |
| cosine_precision@3 | 0.3093 |
| cosine_precision@5 | 0.1887 |
| cosine_precision@10 | 0.0955 |
| cosine_recall@1 | 0.8373 |
| cosine_recall@3 | 0.9279 |
| cosine_recall@5 | 0.9434 |
| cosine_recall@10 | 0.9547 |
| **cosine_ndcg@10** | **0.9018** |
| cosine_mrr@10 | 0.8842 |
| cosine_map@100 | 0.8857 |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 128
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.819 |
| cosine_accuracy@3 | 0.9109 |
| cosine_accuracy@5 | 0.9279 |
| cosine_accuracy@10 | 0.9406 |
| cosine_precision@1 | 0.819 |
| cosine_precision@3 | 0.3036 |
| cosine_precision@5 | 0.1856 |
| cosine_precision@10 | 0.0941 |
| cosine_recall@1 | 0.819 |
| cosine_recall@3 | 0.9109 |
| cosine_recall@5 | 0.9279 |
| cosine_recall@10 | 0.9406 |
| **cosine_ndcg@10** | **0.8856** |
| cosine_mrr@10 | 0.8674 |
| cosine_map@100 | 0.8693 |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 64
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7737 |
| cosine_accuracy@3 | 0.8883 |
| cosine_accuracy@5 | 0.9109 |
| cosine_accuracy@10 | 0.925 |
| cosine_precision@1 | 0.7737 |
| cosine_precision@3 | 0.2961 |
| cosine_precision@5 | 0.1822 |
| cosine_precision@10 | 0.0925 |
| cosine_recall@1 | 0.7737 |
| cosine_recall@3 | 0.8883 |
| cosine_recall@5 | 0.9109 |
| cosine_recall@10 | 0.925 |
| **cosine_ndcg@10** | **0.8574** |
| cosine_mrr@10 | 0.8349 |
| cosine_map@100 | 0.8366 |
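Because the model was trained with a Matryoshka objective, embeddings can be truncated to the smaller dimensionalities evaluated above at a modest cost in retrieval quality. A short sketch of querying at a reduced dimension (256 is used here purely as an example):

```python
from sentence_transformers import SentenceTransformer

# Truncate all embeddings produced by this model to 256 dimensions.
model = SentenceTransformer("potsu-potsu/medembed-base-mrl", truncate_dim=256)

embeddings = model.encode([
    "Can pets affect the infant microbiome?",
    "Exposure to household furry pets influences the gut microbiota of infants.",
])
print(embeddings.shape)  # (2, 256)

# Cosine similarity between the truncated embeddings
print(model.similarity(embeddings[0:1], embeddings[1:2]))
```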
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 4,012 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 16.13 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 63.38 tokens</li><li>max: 485 tokens</li></ul> |
* Samples:
| anchor | positive |
|:---------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What is the implication of histone lysine methylation in medulloblastoma?</code> | <code>Aberrant patterns of H3K4, H3K9, and H3K27 histone lysine methylation were shown to result in histone code alterations, which induce changes in gene expression, and affect the proliferation rate of cells in medulloblastoma.</code> |
| <code>What is the role of STAG1/STAG2 proteins in differentiation?</code> | <code>STAG1/STAG2 proteins are tumour suppressor proteins that suppress cell proliferation and are essential for differentiation.</code> |
| <code>What is the association between cell phone use and glioblastoma?</code> | <code>The association between cell phone use and incident glioblastoma remains unclear. Some studies have reported that cell phone use was associated with incident glioblastoma, and with reduced survival of patients diagnosed with glioblastoma. However, other studies have repeatedly replicated to find an association between cell phone use and glioblastoma.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
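For reference, a loss configured like this can be constructed roughly as follows (a sketch only, not the exact training script; any Sentence Transformer model can be passed in):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# The base checkpoint used for training is not named in this card; the trained model is loaded here for illustration.
model = SentenceTransformer("potsu-potsu/medembed-base-mrl")

inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```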
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:-------:|:-----:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| **1.0** | **8** | **-** | **0.9142** | **0.9151** | **0.905** | **0.8892** | **0.8474** |
| 1.2540 | 10 | 26.698 | - | - | - | - | - |
| 2.0 | 16 | - | 0.9120 | 0.9093 | 0.8999 | 0.8869 | 0.8568 |
| 2.5079 | 20 | 11.062 | - | - | - | - | - |
| 3.0 | 24 | - | 0.9116 | 0.9113 | 0.9009 | 0.8849 | 0.8572 |
| 3.7619 | 30 | 9.198 | - | - | - | - | - |
| 4.0 | 32 | - | 0.9123 | 0.9119 | 0.9018 | 0.8856 | 0.8574 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.12.6
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| viols/MNLP_M3_rag_model | viols | 2025-06-10T17:02:13Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-10T17:00:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
YaTharThShaRma999/finetunedmodel
|
YaTharThShaRma999
| 2025-06-10T17:00:25Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-24T22:52:21Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
phospho-app/Starkosaure-ACT_BBOX-Stuffed_Animal_V4.1_3cam_Normal-5kivb
|
phospho-app
| 2025-06-10T16:59:14Z | 0 | 0 | null |
[
"phosphobot",
"act",
"region:us"
] | null | 2025-06-10T16:55:11Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
max() arg is an empty sequence
```
## Training parameters:
- **Dataset**: [Starkosaure/Stuffed_Animal_V4.1_3cam_Normal](https://huggingface.co/datasets/Starkosaure/Stuffed_Animal_V4.1_3cam_Normal)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
clejordan/MNLP_M3_bnbquantized_final
|
clejordan
| 2025-06-10T16:59:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-10T16:34:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
remioff/MNLP_M2_mcqa_model
|
remioff
| 2025-06-10T16:57:41Z | 47 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-27T08:18:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MikeRoz/DS-R1-Distill-70B-ArliAI-RpR-v4-Large-exl3
|
MikeRoz
| 2025-06-10T16:55:06Z | 0 | 0 |
exllamav3
|
[
"exllamav3",
"exl3",
"text-generation",
"en",
"base_model:ArliAI/DS-R1-Distill-70B-ArliAI-RpR-v4-Large",
"base_model:quantized:ArliAI/DS-R1-Distill-70B-ArliAI-RpR-v4-Large",
"license:llama3.3",
"region:us"
] |
text-generation
| 2025-06-10T11:56:55Z |
---
license: llama3.3
thumbnail: https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/hIZ2ZcaDyfYLT9Yd4pfOs.jpeg
language:
- en
base_model: ArliAI/DS-R1-Distill-70B-ArliAI-RpR-v4-Large
base_model_relation: quantized
library_name: exllamav3
pipeline_tag: text-generation
tags:
- exl3
---
exllamav3 quantizations of ArliAI's [DS-R1-Distill-70B-ArliAI-RpR-v4-Large](https://huggingface.co/ArliAI/DS-R1-Distill-70B-ArliAI-RpR-v4-Large)
[2.25bpw h6](https://huggingface.co/MikeRoz/DS-R1-Distill-70B-ArliAI-RpR-v4-Large-exl3/tree/2.25bpw_H6) (20.661 GiB)
[4.25bpw h6](https://huggingface.co/MikeRoz/DS-R1-Distill-70B-ArliAI-RpR-v4-Large-exl3/tree/4.25bpw_H6) (36.598 GiB)
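A minimal sketch for downloading one of these revisions with `huggingface_hub` (the revision name matches the branch linked above; the local directory is an assumption):
```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="MikeRoz/DS-R1-Distill-70B-ArliAI-RpR-v4-Large-exl3",
    revision="4.25bpw_H6",  # branch name of the 4.25bpw h6 quant listed above
    local_dir="DS-R1-Distill-70B-ArliAI-RpR-v4-Large-exl3-4.25bpw",  # assumed path
)
```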
|
BootesVoid/cm9xd46et005urbgielta9gtu_cmavat2gc021nu1cg0u9ddghq
|
BootesVoid
| 2025-06-10T16:54:46Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-10T16:54:43Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SEXY
---
# Cm9Xd46Et005Urbgielta9Gtu_Cmavat2Gc021Nu1Cg0U9Ddghq
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SEXY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "SEXY",
"lora_weights": "https://huggingface.co/BootesVoid/cm9xd46et005urbgielta9gtu_cmavat2gc021nu1cg0u9ddghq/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cm9xd46et005urbgielta9gtu_cmavat2gc021nu1cg0u9ddghq', weight_name='lora.safetensors')
image = pipeline('SEXY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cm9xd46et005urbgielta9gtu_cmavat2gc021nu1cg0u9ddghq/discussions) to add images that show off what you’ve made with this LoRA.
|
dilarayavuz/imdb-stylebkd-p10-roberta-base
|
dilarayavuz
| 2025-06-10T16:54:09Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"autotrain",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-10T16:47:33Z |
---
library_name: transformers
tags:
- autotrain
- text-classification
base_model: FacebookAI/roberta-base
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.23878462612628937
f1: 0.9158682241764011
precision: 0.9235044997353097
recall: 0.9083571986461859
auc: 0.9719546002342728
accuracy: 0.9084285714285715
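A minimal usage sketch with the 🤗 Transformers `pipeline` API (the repo id is assumed to be this model's Hub id):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="dilarayavuz/imdb-stylebkd-p10-roberta-base",  # this card's Hub repo id
)
print(classifier("I love AutoTrain"))
```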
|
morturr/Mistral-7B-v0.1-LOO_amazon-COMB_one_liners-comb3-seed18-2025-06-10
|
morturr
| 2025-06-10T16:51:21Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-06-10T16:51:05Z |
---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.1-LOO_amazon-COMB_one_liners-comb3-seed18-2025-06-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1-LOO_amazon-COMB_one_liners-comb3-seed18-2025-06-10
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
RamaIrsheidat/Maqsam_PII_without_freezing_V2
|
RamaIrsheidat
| 2025-06-10T16:48:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-10T16:48:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BootesVoid/cmay2e8b8038bu1cguoswiyvb_cmb6tzbjp06t1lexpdhu2q2s1
|
BootesVoid
| 2025-06-10T16:47:57Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-10T16:47:54Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LILY
---
# Cmay2E8B8038Bu1Cguoswiyvb_Cmb6Tzbjp06T1Lexpdhu2Q2S1
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LILY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LILY",
"lora_weights": "https://huggingface.co/BootesVoid/cmay2e8b8038bu1cguoswiyvb_cmb6tzbjp06t1lexpdhu2q2s1/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmay2e8b8038bu1cguoswiyvb_cmb6tzbjp06t1lexpdhu2q2s1', weight_name='lora.safetensors')
image = pipeline('LILY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmay2e8b8038bu1cguoswiyvb_cmb6tzbjp06t1lexpdhu2q2s1/discussions) to add images that show off what you’ve made with this LoRA.
|
Adilbai/CartPole-v1-policy-gradient-RL
|
Adilbai
| 2025-06-10T16:46:01Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-10T16:40:57Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1-policy-gradient-RL
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# CartPole-v1 Policy Gradient Reinforcement Learning Model
## Model Description
This model is a Policy Gradient (REINFORCE) agent trained to solve the CartPole-v1 environment from OpenAI Gym. The agent learns to balance a pole on a cart by taking discrete actions (left or right) to maximize the cumulative reward.
## Model Details
### Model Architecture
- **Algorithm**: REINFORCE (Monte Carlo Policy Gradient)
- **Neural Network**: Simple feedforward network
- Hidden layer size: 16 units
- Activation function: ReLU (typical for policy networks)
- Output layer: Softmax for action probabilities
### Training Configuration
- **Environment**: CartPole-v1 (OpenAI Gym)
- **Training Episodes**: 2,000
- **Max Steps per Episode**: 1,000
- **Learning Rate**: 0.01
- **Discount Factor (γ)**: 1.0 (no discounting)
- **Optimizer**: Adam (PyTorch default)
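A minimal sketch of a policy network matching the architecture and training configuration above (class and method names are illustrative, not taken from the original implementation):
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.distributions import Categorical

class Policy(nn.Module):
    def __init__(self, state_size=4, action_size=2, hidden_size=16):
        super().__init__()
        self.fc1 = nn.Linear(state_size, hidden_size)   # 4 observations -> 16 hidden units
        self.fc2 = nn.Linear(hidden_size, action_size)  # 16 hidden units -> 2 actions

    def forward(self, x):
        x = F.relu(self.fc1(x))                # ReLU activation
        return F.softmax(self.fc2(x), dim=1)   # softmax over action probabilities

    def act(self, state):
        state = torch.from_numpy(state).float().unsqueeze(0)
        probs = self.forward(state)
        m = Categorical(probs)                 # sample an action from the policy
        action = m.sample()
        return action.item(), m.log_prob(action)

policy = Policy()
optimizer = optim.Adam(policy.parameters(), lr=0.01)  # learning rate from the table above
```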
## Environment Details
**CartPole-v1** is a classic control problem where:
- **Observation Space**: 4-dimensional continuous space
- Cart position: [-4.8, 4.8]
- Cart velocity: [-∞, ∞]
- Pole angle: [-0.418 rad, 0.418 rad]
- Pole angular velocity: [-∞, ∞]
- **Action Space**: 2 discrete actions (0: push left, 1: push right)
- **Reward**: +1 for every step the pole remains upright
- **Episode Termination**:
- Pole angle > ±12°
- Cart position > ±2.4
- Episode length > 500 steps (CartPole-v1 limit)
## Training Process
The model was trained using the REINFORCE algorithm with the following key features:
1. **Return Calculation**: Monte Carlo returns computed using dynamic programming for efficiency
2. **Reward Standardization**: Returns are normalized (zero mean, unit variance) for training stability
3. **Policy Loss**: Negative log-probability weighted by standardized returns
4. **Gradient Update**: Standard backpropagation with Adam optimizer
### Key Implementation Details
- Returns calculated in reverse chronological order for computational efficiency
- Numerical stability ensured by adding epsilon to standard deviation
- Deque data structure used for efficient O(1) operations
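A minimal sketch of the return computation and policy update described above (function name and signature are illustrative; only the steps listed in this card are assumed):
```python
import torch
from collections import deque

def reinforce_update(policy, optimizer, rewards, log_probs, gamma=1.0, eps=1e-8):
    # Monte Carlo returns computed in reverse chronological order (dynamic programming)
    returns = deque(maxlen=len(rewards))
    disc_return_t = 0.0
    for r in reversed(rewards):
        disc_return_t = r + gamma * disc_return_t
        returns.appendleft(disc_return_t)

    # Standardize returns (zero mean, unit variance); epsilon guards numerical stability
    returns = torch.tensor(list(returns))
    returns = (returns - returns.mean()) / (returns.std() + eps)

    # Policy loss: negative log-probability weighted by the standardized returns
    policy_loss = torch.stack(
        [-log_prob * ret for log_prob, ret in zip(log_probs, returns)]
    ).sum()

    # Gradient update with Adam
    optimizer.zero_grad()
    policy_loss.backward()
    optimizer.step()
```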
## Performance
The model is evaluated over 10 episodes after training. Expected performance:
- **Target**: Consistently achieve scores close to 500 (maximum possible in CartPole-v1)
- **Success Criterion**: Average score > 475 over evaluation episodes
- **Training Stability**: 100-episode rolling average tracked during training
## Usage
```python
import torch
import gym

# Load the trained policy network
policy = torch.load('policy_model.pth')

# Use the policy to select actions in the environment
env = gym.make('CartPole-v1')
state = env.reset()
action, log_prob = policy.act(state)
```
## Limitations and Considerations
1. **Environment Specific**: Model is specifically trained for CartPole-v1 and won't generalize to other environments
2. **Sample Efficiency**: REINFORCE can be sample inefficient compared to modern policy gradient methods
3. **Variance**: High variance in policy gradient estimates (not using baseline/critic)
4. **Hyperparameter Sensitivity**: Performance may be sensitive to learning rate and network architecture
## Ethical Considerations
This is a simple control task with no ethical implications. The model is designed for:
- Educational purposes in reinforcement learning
- Benchmarking and algorithm development
- Research in policy gradient methods
## Training Environment
- **Framework**: PyTorch
- **Environment**: OpenAI Gym
- **Monitoring**: 100-episode rolling average for performance tracking
## Model Files
- `policy_model.pth`: Trained policy network weights
- `training_scores.pkl`: Training episode scores for analysis
## Citation
If you use this model, please cite:
```bibtex
@misc{cartpole-policy-gradient-2024,
title={CartPole-v1 Policy Gradient Reinforcement Learning Model},
author={Adilbai},
year={2024},
publisher={Hugging Face Hub},
url={https://huggingface.co/Adilbai/CartPole-v1-policy-gradient-RL}
}
```
## References
- Sutton, R. S., & Barto, A. G. (2018). *Reinforcement learning: An introduction*. MIT press.
- Williams, R. J. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. *Machine learning*, 8(3-4), 229-256.
- OpenAI Gym CartPole-v1 Environment Documentation
---
*For questions or issues with this model, please open an issue in the repository.*
|
RedHatAI/Magistral-Small-2506-FP8
|
RedHatAI
| 2025-06-10T16:44:49Z | 0 | 1 |
vllm
|
[
"vllm",
"mistral",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:mistralai/Magistral-Small-2506",
"base_model:quantized:mistralai/Magistral-Small-2506",
"license:apache-2.0",
"compressed-tensors",
"region:us"
] | null | 2025-06-10T16:17:55Z |
---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
library_name: vllm
inference: false
base_model:
- mistralai/Magistral-Small-2506
extra_gated_description: >-
If you want to learn more about how we process your personal data, please read
our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---
# Model Card for Magistral-Small-2506-FP8
This checkpoint is the FP8 W8A8 quantized version of mistralai/Magistral-Small-2506, compressed using the Mistral-format integration in LLM Compressor.
It is important to note that this is a Mistral-format checkpoint, so it must be run in vLLM with `--tokenizer-mode mistral --config-format mistral --load-format mistral`.
For instance, serve the model as follows:
```
vllm serve RedHatAI/Magistral-Small-2506-FP8 --tokenizer-mode mistral --config-format mistral --load-format mistral --tool-call-parser mistral --enable-auto-tool-choice
```
## Evaluation
GSM8k:
```
lm_eval --model local-completions --model_args model=RedHatAI/Magistral-Small-2506-FP8,base_url=http://0.0.0.0:9000/v1/completions,num_concurrent=500,tokenized_requests=False --tasks gsm8k --num_fewshot 5
local-completions (model=RedHatAI/Magistral-Small-2506-FP8,base_url=http://0.0.0.0:9000/v1/completions,num_concurrent=500,tokenized_requests=False), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: 1
|Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.8923|± |0.0085|
| | |strict-match | 5|exact_match|↑ |0.8886|± |0.0087|
```
# Original Model Card
Building upon Mistral Small 3.1 (2503), **with added reasoning capabilities** obtained through SFT on Magistral Medium traces followed by RL on top, it's a small, efficient reasoning model with 24B parameters.
Magistral Small can be deployed locally, fitting within a single RTX 4090 or a 32GB RAM MacBook once quantized.
Learn more about Magistral in our [blog post](https://mistral.ai/news/magistral/).
## Key Features
- **Reasoning:** Capable of long chains of reasoning traces before providing an answer.
- **Multilingual:** Supports dozens of languages, including English, French, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Nepali, Polish, Portuguese, Romanian, Russian, Serbian, Spanish, Swedish, Turkish, Ukrainian, Vietnamese, Arabic, Bengali, Chinese, and Farsi.
- **Apache 2.0 License:** Open license allowing usage and modification for both commercial and non-commercial purposes.
- **Context Window:** A 128k context window, **but** performance might degrade past **40k**. Hence we recommend setting the maximum model length to 40k.
## Benchmark Results
| Model | AIME24 pass@1 | AIME25 pass@1 | GPQA Diamond | Livecodebench (v5) |
|-------|-------------|-------------|--------------|-------------------|
| Magistral Medium | 73.59% | 64.95% | 70.83% | 59.36% |
| Magistral Small | 70.68% | 62.76% | 68.18% | 55.84% |
## Sampling parameters
Please make sure to use:
- `top_p`: 0.95
- `temperature`: 0.7
- `max_tokens`: 40960
## Basic Chat Template
We highly recommend including the default system prompt used during RL for the best results, you can edit and customise it if needed for your specific use case.
```
<s>[SYSTEM_PROMPT]system_prompt
A user will ask you to solve a task. You should first draft your thinking process (inner monologue) until you have derived the final answer. Afterwards, write a self-contained summary of your thoughts (i.e. your summary should be succinct but contain all the critical steps you needed to reach the conclusion). You should use Markdown to format your response. Write both your thoughts and summary in the same language as the task posed by the user. NEVER use \boxed{} in your response.
Your thinking process must follow the template below:
<think>
Your thoughts or/and draft, like working through an exercise on scratch paper. Be as casual and as long as you want until you are confident to generate a correct answer.
</think>
Here, provide a concise summary that reflects your reasoning and presents a clear final answer to the user. Don't mention that this is a summary.
Problem:
[/SYSTEM_PROMPT][INST]user_message[/INST]<think>
reasoning_traces
</think>
assistant_response</s>[INST]user_message[/INST]
```
*`system_prompt`, `user_message` and `assistant_response` are placeholders.*
We invite you to choose, depending on your use case and requirements, between keeping reasoning traces during multi-turn interactions or keeping only the final assistant response.
***Please make sure to use [mistral-common](https://github.com/mistralai/mistral-common) as the source of truth***
## Usage
The model can be used with the following frameworks;
### Inference
- [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [below](#vllm)
In addition the community has prepared quantized versions of the model that can be used with the following frameworks (*alphabetically sorted*):
- [`llama.cpp`](https://github.com/ggml-org/llama.cpp): https://huggingface.co/mistralai/Magistral-Small-2506_gguf
- [`lmstudio` (llama.cpp, MLX)](https://lmstudio.ai/): https://lmstudio.ai/models/mistralai/magistral-small
- [`ollama` (llama.cpp)](https://ollama.com/): https://ollama.com/library/magistral
- [`unsloth` (llama.cpp)](https://huggingface.co/unsloth): https://huggingface.co/unsloth/Magistral-Small-2506-GGUF
### Training
Fine-tuning is possible with (*alphabetically sorted*):
- [`axolotl`](https://github.com/axolotl-ai-cloud/axolotl): https://github.com/axolotl-ai-cloud/axolotl/tree/main/examples/magistral
- [`unsloth`](https://github.com/unslothai/unsloth): https://docs.unsloth.ai/basics/magistral
### Other
Also you can use Magistral with:
- [`kaggle`](https://www.kaggle.com/models/mistral-ai/magistral-small-2506): https://www.kaggle.com/models/mistral-ai/magistral-small-2506
### vLLM (recommended)
We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm)
to implement production-ready inference pipelines.
**_Installation_**
Make sure you install the latest [`vLLM`](https://github.com/vllm-project/vllm/) code:
```
pip install -U vllm \
--pre \
--extra-index-url https://wheels.vllm.ai/nightly
```
Doing so should automatically install [`mistral_common >= 1.6.0`](https://github.com/mistralai/mistral-common/releases/tag/v1.6.0).
To check:
```
python -c "import mistral_common; print(mistral_common.__version__)"
```
You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or on the [docker hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39).
Serve model as follows:
```
vllm serve mistralai/Magistral-Small-2506 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
```
Ping model as follows:
```py
from openai import OpenAI
from huggingface_hub import hf_hub_download
# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"
TEMP = 0.7
TOP_P = 0.95
MAX_TOK = 40_960
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
models = client.models.list()
model = models.data[0].id
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
return system_prompt
SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")
query = "Write 4 sentences, each with at least 8 words. Now make absolutely sure that every sentence has exactly one word less than the previous sentence."
# or try out other queries
# query = "Exactly how many days ago did the French Revolution start? Today is June 4th, 2025."
# query = "Think about 5 random numbers. Verify if you can combine them with addition, multiplication, subtraction or division to 133"
# query = "If it takes 30 minutes to dry 12 T-shirts in the sun, how long does it take to dry 33 T-shirts?"
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{"role": "user", "content": query}
]
stream = client.chat.completions.create(
model=model,
messages=messages,
stream=True,
temperature=TEMP,
top_p=TOP_P,
max_tokens=MAX_TOK,
)
print("client: Start streaming chat completions...")
printed_content = False
for chunk in stream:
content = None
    # Only handle chunks whose delta actually carries content
if hasattr(chunk.choices[0].delta, "content"):
content = chunk.choices[0].delta.content
if content is not None:
if not printed_content:
printed_content = True
print("\ncontent:", end="", flush=True)
# Extract and print the content
print(content, end="", flush=True)
# content:<think>
# Alright, I need to write 4 sentences where each one has at least 8 words and each subsequent sentence has one fewer word than the previous one.
# ...
# Final boxed answer (the four sentences):
# \[
# \boxed{
# \begin{aligned}
# &\text{1. The quick brown fox jumps over lazy dog and yells hello.} \\
# &\text{2. I saw the cat on the stair with my hat.} \\
# &\text{3. The man in the moon came down quickly today.} \\
# &\text{4. A cat sat on the mat today patiently.}
# \end{aligned}
# }
# \]
```
|
dilarayavuz/imdb-synbkd-p10-roberta-base
|
dilarayavuz
| 2025-06-10T16:44:06Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"autotrain",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-10T16:37:25Z |
---
library_name: transformers
tags:
- autotrain
- text-classification
base_model: FacebookAI/roberta-base
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.2129785120487213
f1: 0.925641359552025
precision: 0.9276951187679457
recall: 0.9235966735966736
auc: 0.9729829079250081
accuracy: 0.9184285714285715
|
zacbrld/MNLP_M3_rag_model_lastV10
|
zacbrld
| 2025-06-10T16:44:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-10T16:42:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cragtmp/task1add3-250
|
cragtmp
| 2025-06-10T16:42:20Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-11B-Vision-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-11B-Vision-Instruct",
"region:us"
] | null | 2025-06-10T16:41:28Z |
---
base_model: meta-llama/Llama-3.2-11B-Vision-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
Ssdaizi/ReMUD-7B
|
Ssdaizi
| 2025-06-10T16:41:24Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"medical",
"conversational",
"dataset:Ssdaizi/ReMUD",
"arxiv:2506.07837",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-05-14T10:39:27Z |
---
datasets:
- Ssdaizi/ReMUD
license: apache-2.0
tags:
- medical
library_name: transformers
pipeline_tag: image-text-to-text
---
# ReMUD-7B
This repository contains the code and model weights for the paper [HAIBU-ReMUD: Reasoning Multimodal Ultrasound Dataset and Model Bridging to General Specific Domains](https://huggingface.co/papers/2506.07837). The Github repository can be found at https://github.com/ShiDaizi/ReMUD.
## Introduction
ReMUD-7B is a reasoning multimodal large language model for medical ultrasound, fine-tuned from Qwen2.5-VL-7B-Instruct on the [ReMUD](https://huggingface.co/datasets/Ssdaizi/ReMUD) dataset.
## Requirements
The code for ReMUD-7B is based on [Qwen2.5-VL](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
You can refer to the requirements of [Qwen2.5-VL](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
## Quickstart
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch
model_name = "Ssdaizi/ReMUD-7B"
device = "cuda:0"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
model_name, torch_dtype=torch.bfloat16, device_map=device
)
processor = AutoProcessor.from_pretrained(model_name)
def messages_generate(system_prompt: str, image_list: list, prompt: str):
messages = [
{
"role": "system",
"content": system_prompt
}
]
content = []
for image_path in image_list:
content.append(
{
"type": "image",
"image": image_path,
}
)
content.append({"type": "text", "text": prompt})
messages.append({"role": "user", "content": content})
return messages
def inference(messages: list, device: str, temperature=0.6, top_p=0.95, max_new_tokens=512):
    # Preparation for inference: render the chat template and collect image/video inputs
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to(device)
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, temperature=temperature, top_p=top_p)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
return output_text
# System prompt (in Chinese): "You are an AI assistant specializing in ultrasound medicine. You are given an
# ultrasound multiple-choice question; give your reasoning inside <think>...</think> tags, output only one
# option, and place the chosen option inside <answer>...</answer> tags."
system_prompt = "你是一名人工智能助手,专门研究超声医学领域。你收到了一个超声选择题,请给出你的思考过程,并放在<think>思考过程</think>标签内,只输出一个选项,把选项答案放在<answer>选项</answer>内。"
image_list = [
    "./images/238564_1.jpeg",
    "./images/238564_2.jpeg"
]
# Question (in Chinese): "What is the shape of the liver in this ultrasound examination?
# A: reduced  B: normal  C: slightly plump  D: enlarged"
prompt = (
    "肝脏在这次超声检查中的形态是怎样的?A: 缩小 B: 正常 C: 稍饱满 D: 增大\n"
    "<image>\n"
    "<image>"
)
messages = messages_generate(system_prompt=system_prompt, image_list=image_list, prompt=prompt)
output_text = inference(messages=messages, device=device)
```
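The system prompt asks the model to wrap its reasoning in `<think>...</think>` tags and its final option in `<answer>...</answer>` tags. A small helper such as the following (an illustrative sketch, not part of the original repository) can pull the chosen option out of `output_text`:
```python
import re

def extract_answer(response: str) -> str:
    """Return the contents of the <answer>...</answer> tag, or the raw response if the tag is missing."""
    match = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
    return match.group(1).strip() if match else response.strip()

print(extract_answer(output_text[0]))
```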
|
TheGardener/KD-qwen-0.33B-mlp-block-epoch-4th-ver1
|
TheGardener
| 2025-06-10T16:38:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-10T16:37:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sam017/Reinforce-CartPole-v1
|
Sam017
| 2025-06-10T16:38:01Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-10T16:37:49Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
artdev99/MNLP_M3_document_encoder
|
artdev99
| 2025-06-10T16:36:26Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"tf",
"rust",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"dataset:eli5",
"dataset:snli",
"dataset:multi_nli",
"dataset:wikihow",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/QQP",
"dataset:embedding-data/SPECTER",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/WikiAnswers",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-10T10:22:54Z |
---
language: en
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
pipeline_tag: sentence-similarity
---
Forked from: https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
------
## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from a pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8 devices, as well as guidance from members of Google's Flax, JAX, and Cloud teams on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
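For instance, a minimal semantic-search sketch (the query and corpus below are purely illustrative) ranks passages by cosine similarity to a query embedding:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Illustrative query and corpus; substitute your own documents
query = "How do I bake sourdough bread?"
corpus = [
    "A guide to fermenting and baking sourdough at home.",
    "The history of the Roman Empire.",
    "Tips for maintaining a bicycle chain.",
]

query_emb = model.encode(query, convert_to_tensor=True)
corpus_emb = model.encode(corpus, convert_to_tensor=True)

# Cosine similarity between the query and each corpus entry
scores = util.cos_sim(query_emb, corpus_emb)[0]
best = int(scores.argmax())
print(corpus[best], float(scores[best]))
```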
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between every possible sentence pair in the batch.
We then apply a cross-entropy loss in which each sentence must identify its true pair among the in-batch negatives.
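A rough sketch of this in-batch objective (the function name and similarity scale below are illustrative; the exact implementation is in `train_script.py`):
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb: torch.Tensor, positive_emb: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    """Cross-entropy over cosine similarities: row i's positive is column i; every other column acts as a negative."""
    anchor_emb = F.normalize(anchor_emb, p=2, dim=1)
    positive_emb = F.normalize(positive_emb, p=2, dim=1)
    scores = anchor_emb @ positive_emb.T * scale  # (batch, batch) cosine similarities, scaled before softmax
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```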
#### Hyper parameters
We trained our model on a TPU v3-8. We train the model during 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is available in this repository: `train_script.py`.
#### Training data
We fine-tune our model on the concatenation of multiple datasets; the total number of sentence pairs is above 1 billion.
We sampled each dataset with a weighted probability, the configuration of which is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** |
|